Zippy's Archon is a customized version of Cole Medin's Archon project, featuring additional branding, a refined plugin architecture, and various enhancements for enterprise deployments.
The original Archon code is licensed under the MIT license (see LICENSE), and all credits go to Cole Medin. We have retained all references and license terms from the original repository. Any new features added by Zippy are also provided under the same open-source license.
- [Zippy Branding] UI enhancements, custom logos, theming
- [Feature A] Self-Healing Orchestrator with advanced error handling
- [Feature B] Tool Generator Sub-Agent for plugin creation
Follow the original instructions from Archon below, or see Zippy Setup for extra steps...
...
I am building a fork with these goals:
We want to evolve Archon V2 into a more robust, modular, and extensible agent framework—capable of handling multiple sub-agents, a wide variety of tools, and smarter error handling. Below is a prioritized to-do list outlining the steps needed to achieve this goal, plus details on how to proceed with the first major refactor.
- **Refactor Orchestrator Logic**
  - Goal: Separate high-level orchestration (the “master agent”) from the individual nodes and sub-agents currently defined in `archon_graph.py`.
  - Outcome: A new `orchestrator.py` to handle flow creation, memory state, and sub-agent orchestration.
  - Priority: High (foundation for further work).
- **Create a Plugin Architecture for Tools**
  - Goal: Move tool definitions out of `pydantic_ai_coder.py` into a dedicated `plugins/` (or `tools/`) directory.
  - Outcome: A simple plugin loader or registry that auto-detects tool modules, making them easily reusable and extensible (see the sketch after this list).
  - Priority: High (enables quick addition or removal of tools).
- **Implement Error Handling & Diagnostic Agent**
  - Goal: If a node or sub-agent fails repeatedly, pass context to a “Diagnostic Agent” that attempts self-healing or user guidance.
  - Outcome: Prevention of entire flow failures; improved reliability and debugging.
  - Priority: High (critical for production-ready stability).
- **Add a “Tool Generator” Sub-Agent**
  - Goal: Automatically generate new plugin files for external integrations (e.g., Twilio, Slack) when requested by the user.
  - Outcome: Rapid expansion of capabilities and less manual coding for new services.
  - Priority: Medium (depends on plugin architecture).
- **Enhance Streamlit UI / Integrate with n8n**
  - Goal: Provide a user-friendly interface to orchestrate tasks, show logs, and possibly visually link subflows.
  - Outcome: Broader accessibility for non-developers; potential “drag-and-drop” automation via n8n nodes.
  - Priority: Medium (quality-of-life improvement).
- **Database & Logging Improvements**
  - Goal: Add tables (e.g., `agent_runs`) for storing conversation logs, sub-agent usage, error messages, and “lessons learned” for RAG.
  - Outcome: Persistent session tracking, advanced analytics, and potential continuous learning.
  - Priority: Medium (helps debugging and analytics).
- **Deployment & Scaling with Proxmox**
  - Goal: Containerize or orchestrate the services (orchestrator + vector DB + crawler, etc.) in a Proxmox cluster.
  - Outcome: Allows horizontal scaling, environment isolation, and easier resource allocation for heavier workloads.
  - Priority: Lower (infrastructure enhancement).
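To make the plugin architecture in the second item concrete, here is a minimal sketch of a registry that auto-detects tool modules in a `plugins/` directory. The `plugins/` package layout and the `register(registry)` convention are illustrative assumptions, not existing Archon code:

```python
# tool_registry.py — illustrative sketch; the plugins/ package and the
# register(registry) hook are assumptions, not existing Archon code.
import importlib
import pkgutil
from typing import Callable, Dict

import plugins  # a plugins/ package whose modules each define one or more tools


class ToolRegistry:
    def __init__(self):
        self._tools: Dict[str, Callable] = {}

    def add(self, name: str, func: Callable) -> None:
        self._tools[name] = func

    def get(self, name: str) -> Callable:
        return self._tools[name]

    def names(self) -> list:
        return sorted(self._tools)


def load_plugins() -> ToolRegistry:
    """Import every module under plugins/ and let it register its tools."""
    registry = ToolRegistry()
    for module_info in pkgutil.iter_modules(plugins.__path__):
        module = importlib.import_module(f"plugins.{module_info.name}")
        # Each plugin module is expected to expose register(registry).
        if hasattr(module, "register"):
            module.register(registry)
    return registry
```

With something like this, adding or removing an integration is just a matter of dropping a module into `plugins/` that defines its tool functions and a `register()` hook.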
The first coding step is to extract the orchestration logic out of `archon_graph.py` into a new file (e.g., `orchestrator.py`). This will make `archon_graph.py` focus only on:
- Defining sub-agents (Reasoner, Router, Coder, etc.)
- Declaring node functions and their typed states

Meanwhile, `orchestrator.py` will handle:
- Building the `StateGraph`
- Compiling it with a memory saver
- Exposing methods like `start_flow()` and `resume_flow()`
Sample code:

```python
# orchestrator.py
import uuid

from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import Command, interrupt

# Import the existing node functions & typed state from archon_graph
from archon_graph import (
    define_scope_with_reasoner,
    coder_agent,
    finish_conversation,
    route_user_message,
    AgentState,
)


class Orchestrator:
    def __init__(self):
        self.memory = MemorySaver()
        self.thread_id = None
        self.graph = self._build_graph()

    def _build_graph(self):
        builder = StateGraph(AgentState)
        builder.add_node("define_scope_with_reasoner", define_scope_with_reasoner)
        builder.add_node("coder_agent", coder_agent)
        builder.add_node("get_next_user_message", self.get_next_user_message)
        builder.add_node("finish_conversation", finish_conversation)
        builder.add_edge(START, "define_scope_with_reasoner")
        builder.add_edge("define_scope_with_reasoner", "coder_agent")
        builder.add_edge("coder_agent", "get_next_user_message")
        builder.add_conditional_edges(
            "get_next_user_message",
            route_user_message,
            {"coder_agent": "coder_agent", "finish_conversation": "finish_conversation"},
        )
        builder.add_edge("finish_conversation", END)
        return builder.compile(checkpointer=self.memory)

    def get_next_user_message(self, state: AgentState):
        # Pause the graph here until the caller resumes with the next user message.
        value = interrupt({})
        return {"latest_user_message": value}

    def start_flow(self, user_message: str):
        # The checkpointer needs a thread_id so this run can be paused and resumed.
        self.thread_id = str(uuid.uuid4())
        config = {"configurable": {"thread_id": self.thread_id}}
        initial_state = {
            "latest_user_message": user_message,
            "messages": [],
            "scope": "",
        }
        return self.graph.invoke(initial_state, config)

    def resume_flow(self, user_message: str):
        # Feed the new message back into the interrupted get_next_user_message node.
        config = {"configurable": {"thread_id": self.thread_id}}
        return self.graph.invoke(Command(resume=user_message), config)
```
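For context, here is a minimal, hypothetical driver (not part of the repo) showing how `streamlit_ui.py` or another caller might use this class, pausing at the interrupt and resuming with a follow-up message:

```python
# example_driver.py — hypothetical usage sketch, assuming the Orchestrator above
from orchestrator import Orchestrator

orchestrator = Orchestrator()

# Start a run; it executes until get_next_user_message() hits interrupt().
result = orchestrator.start_flow("Build an agent that summarizes GitHub issues")
print(result)

# Resume with the user's follow-up; route_user_message then decides whether to
# loop back into coder_agent or end via finish_conversation.
result = orchestrator.resume_flow("Looks good, please wrap it up")
print(result)
```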
Next Steps

Once this refactor is tested and stable:
- Implement Task 2: Create a directory for plugins and move all existing “tools” out of `pydantic_ai_coder.py`.
- Implement Task 3: Add robust error handling and a “Diagnostic Agent” node if repeated failures occur (a rough sketch follows below).

With these changes, Archon V2 can more easily scale, incorporate new features, and handle complex multi-agent workflows.
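For Task 3, one rough shape this could take (a sketch only; the `error_count`/`last_error` state fields and the `diagnostic_agent` node name are hypothetical) is a wrapper that catches node failures, counts them in state, and routes to a Diagnostic Agent once a retry threshold is exceeded:

```python
# diagnostics.py — illustrative sketch; the error_count/last_error state fields
# and the diagnostic_agent node are assumptions, not existing Archon code.
from typing import Callable

MAX_RETRIES = 2


def with_diagnostics(node_fn: Callable) -> Callable:
    """Wrap a node so a failure is recorded in state instead of killing the flow."""
    def wrapped(state: dict) -> dict:
        try:
            result = node_fn(state)
            return {**result, "error_count": 0, "last_error": ""}
        except Exception as exc:
            return {
                "error_count": state.get("error_count", 0) + 1,
                "last_error": f"{node_fn.__name__}: {exc}",
            }
    return wrapped


def route_after_node(state: dict) -> str:
    """Conditional edge: continue, retry the failed node, or escalate."""
    if not state.get("last_error"):
        return "continue"
    if state.get("error_count", 0) <= MAX_RETRIES:
        return "retry"
    return "diagnostic_agent"
```

The Diagnostic Agent node itself could then read `last_error`, attempt a fix, or surface guidance back to the user.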
If you have any questions or ideas, please comment below!
[ V2 - Agentic Workflow ] Using LangGraph + Pydantic AI for multi-agent orchestration and planning
Archon is an AI meta-agent designed to autonomously build, refine, and optimize other AI agents.
It serves both as a practical tool for developers and as an educational framework demonstrating the evolution of agentic systems. Archon will be developed in iterations, starting with just a simple Pydantic AI agent that can build other Pydantic AI agents, all the way to a full agentic workflow using LangGraph that can build other AI agents with any framework. Through its iterative development, Archon showcases the power of planning, feedback loops, and domain-specific knowledge in creating robust AI agents.
The current version of Archon is V2 as mentioned above - see V2 Documentation for details.
Archon demonstrates three key principles in modern AI development:
- Agentic Reasoning: Planning, iterative feedback, and self-evaluation overcome the limitations of purely reactive systems
- Domain Knowledge Integration: Seamless embedding of frameworks like Pydantic AI and LangGraph within autonomous workflows
- Scalable Architecture: Modular design supporting maintainability, cost optimization, and ethical AI practices
- Basic RAG-powered agent using Pydantic AI
- Supabase vector database for documentation storage
- Simple code generation without validation
- Learn more about V1
- Multi-agent system with planning and execution separation
- Reasoning LLM (O3-mini/R1) for architecture planning
- LangGraph for workflow orchestration
- Support for local LLMs via Ollama
- Learn more about V2
- V3: Self-Feedback Loop - Automated validation and error correction
- V4: Tool Library Integration - Pre-built external tool incorporation
- V5: Multi-Framework Support - Framework-agnostic agent generation
- V6: Autonomous Framework Learning - Self-updating framework adapters
- Docker
- LangSmith
- MCP
- Other frameworks besides Pydantic AI
- Other vector databases besides Supabase
Since V2 is the current version of Archon, all the code for V2 is in both the `archon` and `archon/iterations/v2-agentic-workflow` directories.
- Python 3.11+
- Supabase account and database
- OpenAI/OpenRouter API key or Ollama for local LLMs
- Streamlit (for web interface)
- Clone the repository:
  ```bash
  git clone https://github.com/coleam00/archon.git
  cd archon
  ```
- Install dependencies:
  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  pip install -r requirements.txt
  ```
- Configure environment:
  - Rename `.env.example` to `.env`
  - Edit `.env` with your settings:
    ```
    BASE_URL=https://api.openai.com/v1   # for OpenAI; https://api.openrouter.ai/v1 for OpenRouter, or your Ollama URL
    LLM_API_KEY=your_openai_or_openrouter_api_key
    OPENAI_API_KEY=your_openai_api_key   # Required for embeddings
    SUPABASE_URL=your_supabase_url
    SUPABASE_SERVICE_KEY=your_supabase_service_key
    PRIMARY_MODEL=gpt-4o-mini            # Main agent model
    REASONER_MODEL=o3-mini               # Planning model
    ```
- Set up the database:
  - Execute `site_pages.sql` in your Supabase SQL Editor
  - This creates tables and enables vector similarity search
- Crawl documentation:
  ```bash
  python crawl_pydantic_ai_docs.py
  ```
- Launch the UI:
  ```bash
  streamlit run streamlit_ui.py
  ```

Visit http://localhost:8501 to start building AI agents!
- `archon_graph.py`: LangGraph workflow and agent coordination
- `pydantic_ai_coder.py`: Main coding agent with RAG capabilities
- `crawl_pydantic_ai_docs.py`: Documentation processor
- `streamlit_ui.py`: Interactive web interface
- `site_pages.sql`: Database schema
```sql
CREATE TABLE site_pages (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    url TEXT,
    chunk_number INTEGER,
    title TEXT,
    summary TEXT,
    content TEXT,
    metadata JSONB,
    embedding VECTOR(1536)
);
```
We welcome contributions! Whether you're fixing bugs, adding features, or improving documentation, please feel free to submit a Pull Request.
For version-specific details: