AgentDock is a framework for building sophisticated AI agents that deliver complex tasks with **configurable determinism**. It consists of two main components:

- **AgentDock Core**: An open-source, backend-first framework for building and deploying AI agents. It's designed to be framework-agnostic and provider-independent, giving you complete control over your agent's implementation.
- **Open Source Client**: A complete Next.js application that serves as a reference implementation and consumer of the AgentDock Core framework. You can see it in action at https://hub.agentdock.ai

Built with TypeScript, AgentDock emphasizes simplicity, extensibility, and configurable determinism, making it ideal for building reliable and predictable AI systems that can operate with minimal supervision.
**Dr. Gregory House**: A diagnostic reasoning powerhouse that orchestrates the `search`, `deep_research`, and `pubmed` tools in a multi-stage workflow to tackle complex medical cases using methodical investigation techniques that rival expert diagnosticians.
Dr.House_AgentDock.mp4
**Cognitive Reasoner**: Multi-stage reasoning engine that orchestrates seven specialized cognitive tools (`search`, `think`, `reflect`, `compare`, `critique`, `brainstorm`, `debate`) in configurable workflows to systematically deconstruct and solve complex problems with human-like reasoning patterns.
Cognitive-Reasoner_AgentDock.mp4
**History Mentor**: Immersive educational agent combining vectorized historical knowledge with `search` capabilities and dynamic Mermaid diagram rendering to create authentic learning experiences that visualize complex historical relationships and timelines on demand.
History-Mentor_AgentDock.mp4
**Calorie Vision**: Vision-based nutritional analysis system that combines computer vision with structured data extraction to deliver precise macro and micronutrient breakdowns from food images, functioning like a nutritionist that can instantly quantify meal composition without relying on manual input.
Calorie-Vision_AgentDock.mp4
Français • 日本語 • 한국어 • 中文 • Español • Deutsch • Italiano • Nederlands • Polski • Türkçe • Українська • Ελληνικά • Русский • العربية
AgentDock is built on these core principles:
- Simplicity First: Minimal code required to create functional agents
- Node-Based Architecture: All capabilities implemented as nodes
- Tools as Specialized Nodes: Tools extend the node system for agent capabilities
- Configurable Determinism: Control the predictability of agent behavior
- Type Safety: Comprehensive TypeScript types throughout
Configurable determinism is a cornerstone of AgentDock's design philosophy, enabling you to balance creative AI capabilities with predictable system behavior:
- AgentNodes are inherently non-deterministic as LLMs may generate different responses each time
- Workflows can be made more deterministic through defined tool execution paths
- Developers can control the level of determinism by configuring which parts of the system use LLM inference
- Even with LLM components, the overall system behavior remains predictable through structured tool interactions
- This balanced approach enables both creativity and reliability in your AI applications
AgentDock fully supports the deterministic workflows you're familiar with from typical workflow builders. All the predictable execution paths and reliable outcomes you expect are available, with or without LLM inference:
```mermaid
flowchart LR
  Input[Input] --> Process[Process]
  Process --> Database[(Database)]
  Process --> Output[Output]

  style Input fill:#f9f9f9,stroke:#333,stroke-width:1px
  style Output fill:#f9f9f9,stroke:#333,stroke-width:1px
  style Process fill:#d4f1f9,stroke:#333,stroke-width:1px
  style Database fill:#e8e8e8,stroke:#333,stroke-width:1px
```
With AgentDock, you can also leverage AgentNodes with LLMs when you need more adaptability. The creative outputs may vary based on your needs, while maintaining structured interaction patterns:
```mermaid
flowchart TD
  Input[User Query] --> Agent[AgentNode]
  Agent -->|"LLM Reasoning (Non-Deterministic)"| ToolChoice{Tool Selection}
  ToolChoice -->|"Option A"| ToolA[Deep Research Tool]
  ToolChoice -->|"Option B"| ToolB[Data Analysis Tool]
  ToolChoice -->|"Option C"| ToolC[Direct Response]
  ToolA --> Response[Final Response]
  ToolB --> Response
  ToolC --> Response

  style Input fill:#f9f9f9,stroke:#333,stroke-width:1px
  style Agent fill:#ffdfba,stroke:#333,stroke-width:1px
  style ToolChoice fill:#ffdfba,stroke:#333,stroke-width:1px
  style ToolA fill:#d4f1f9,stroke:#333,stroke-width:1px
  style ToolB fill:#d4f1f9,stroke:#333,stroke-width:1px
  style ToolC fill:#d4f1f9,stroke:#333,stroke-width:1px
  style Response fill:#f9f9f9,stroke:#333,stroke-width:1px
```
AgentDock gives you the best of both worlds by combining non-deterministic agent intelligence with deterministic workflow execution:
```mermaid
flowchart TD
  Input[User Query] --> Agent[AgentNode]
  Agent -->|"LLM Reasoning (Non-Deterministic)"| FlowChoice{Sub-Workflow Selection}
  FlowChoice -->|"Decision A"| Flow1[Deterministic Workflow 1]
  FlowChoice -->|"Decision B"| Flow2[Deterministic Workflow 2]
  FlowChoice -->|"Decision C"| DirectResponse[Generate Response]
  Flow1 --> |"Step 1 → 2 → 3 → ... → 200"| Flow1Result[Workflow 1 Result]
  Flow2 --> |"Step 1 → 2 → 3 → ... → 100"| Flow2Result[Workflow 2 Result]
  Flow1Result --> Response[Final Response]
  Flow2Result --> Response
  DirectResponse --> Response

  style Input fill:#f9f9f9,stroke:#333,stroke-width:1px
  style Agent fill:#ffdfba,stroke:#333,stroke-width:1px
  style FlowChoice fill:#ffdfba,stroke:#333,stroke-width:1px
  style Flow1 fill:#c9e4ca,stroke:#333,stroke-width:1px
  style Flow2 fill:#c9e4ca,stroke:#333,stroke-width:1px
  style Flow1Result fill:#c9e4ca,stroke:#333,stroke-width:1px
  style Flow2Result fill:#c9e4ca,stroke:#333,stroke-width:1px
  style DirectResponse fill:#ffdfba,stroke:#333,stroke-width:1px
  style Response fill:#f9f9f9,stroke:#333,stroke-width:1px
```
This approach enables complex multi-step workflows (potentially involving hundreds of deterministic steps implemented within tools or as connected node sequences) to be invoked by intelligent agent decisions. Each workflow executes predictably despite being triggered by non-deterministic agent reasoning.
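The hybrid pattern above can be sketched in plain TypeScript. Everything here is illustrative — the workflow and function names are invented for this example and are not part of the AgentDock API — but it shows the key property: the selection step may vary between runs, while every selectable path beneath it executes as a fixed, deterministic sequence of steps.

```typescript
// A deterministic workflow is just an ordered list of pure steps.
type Step = (input: string) => string;

function runWorkflow(steps: Step[], input: string): string {
  // Each step feeds the next; same input always yields the same output.
  return steps.reduce((acc, step) => step(acc), input);
}

// Two hypothetical deterministic sub-workflows.
const researchFlow: Step[] = [
  (s) => `fetched:${s}`,
  (s) => `summarized:${s}`,
];
const analysisFlow: Step[] = [
  (s) => `parsed:${s}`,
  (s) => `charted:${s}`,
];

// Stand-in for the AgentNode's LLM reasoning: the choice of flow may be
// non-deterministic in a real agent, but each chosen flow is not.
function selectFlow(query: string): Step[] {
  return query.includes("data") ? analysisFlow : researchFlow;
}

function handleQuery(query: string): string {
  return runWorkflow(selectFlow(query), query);
}
```

Swapping `selectFlow` for a real LLM call changes which branch runs, never how a branch runs — which is the determinism guarantee the diagram describes.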
For more advanced AI agent workflows and multi-stage processing pipelines, we're building AgentDock Pro - a powerful platform for creating, visualizing, and running complex agent systems.
Think of it like driving. Sometimes you need the AI's creativity (like navigating city streets - non-deterministic), and sometimes you need reliable, step-by-step processes (like following highway signs - deterministic). AgentDock lets you build systems that use both, choosing the right approach for each part of a task. You get the AI's smarts and predictable results where needed.
The framework is built around a powerful, modular node-based system, serving as the foundation for all agent functionality. This architecture uses distinct node types as building blocks:

- **BaseNode**: The fundamental class establishing the core interface and capabilities for all nodes.
- **AgentNode**: A specialized core node orchestrating LLM interactions, tool usage, and agent logic.
- **Tools & Custom Nodes**: Developers implement agent capabilities and custom logic as nodes extending `BaseNode`.

These nodes interact through managed registries and can be connected (leveraging the core architecture's ports and potential message bus) to enable complex, configurable, and potentially deterministic agent behaviors and workflows.
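As a rough mental model of this pattern — a common base class plus a registry keyed by node type — consider the following sketch. These stand-in classes are not the real `agentdock-core` implementations; they only illustrate the shape described above.

```typescript
// Hypothetical stand-in for the framework's base class: every node shares
// a type identifier and an execute() capability.
abstract class BaseNode {
  abstract readonly type: string;
  abstract execute(input: unknown): Promise<unknown>;
}

// A tool is modeled as a node with a concrete execute() implementation.
class EchoToolNode extends BaseNode {
  readonly type = "tool.echo";
  async execute(input: unknown): Promise<unknown> {
    return { echoed: input };
  }
}

// A registry manages registration and retrieval of node instances by type.
class NodeRegistry {
  private nodes = new Map<string, BaseNode>();

  register(node: BaseNode): void {
    this.nodes.set(node.type, node);
  }

  get(type: string): BaseNode | undefined {
    return this.nodes.get(type);
  }
}

const registry = new NodeRegistry();
registry.register(new EchoToolNode());
```

An orchestrator can then look nodes up by type at runtime and wire them together, which is what makes the system both modular and extensible.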
For a detailed explanation of the node system's components and capabilities, please see the Node System Documentation.
For a comprehensive guide, see the Getting Started Guide.
- Node.js ≥ 20.11.0 (LTS)
- pnpm ≥ 9.15.0 (Required)
- API keys for LLM providers (Anthropic, OpenAI, etc.)
1. **Clone the Repository**:

   ```bash
   git clone https://github.com/AgentDock/AgentDock.git
   cd AgentDock
   ```

2. **Install pnpm**:

   ```bash
   corepack enable
   corepack prepare pnpm@latest --activate
   ```

3. **Install Dependencies**:

   ```bash
   pnpm install
   ```

   For a clean reinstallation (when you need to rebuild from scratch):

   ```bash
   pnpm run clean-install
   ```

   This script removes all `node_modules` directories and lock files, then reinstalls dependencies correctly.

4. **Configure Environment**:

   Create an environment file (`.env` or `.env.local`) based on `.env.example`:

   ```bash
   # Option 1: Create .env.local
   cp .env.example .env.local

   # Option 2: Create .env
   cp .env.example .env
   ```

   Then add your API keys to the environment file.

5. **Start the Development Server**:

   ```bash
   pnpm dev
   ```
Click the button above to deploy the AgentDock Open Source Client directly to your Vercel account.
| Capability | Description | Documentation |
|---|---|---|
| Session Management | Isolated, performant state management for conversations | Session Documentation |
| Orchestration Framework | Control agent behavior and tool availability based on context | Orchestration Documentation |
| Storage Abstraction | Flexible storage system with pluggable providers for KV, Vector, and Secure storage | Storage Documentation |
| Evaluation Framework | Systematically measure and improve agent quality with diverse evaluators | Evaluation Documentation |
The storage system is currently evolving with key-value storage (Memory, Redis, Vercel KV providers) and secure client-side storage, while vector storage and additional backends are in development.
Documentation for the AgentDock framework is available at hub.agentdock.ai/docs and in the `/docs/` folder of this repository. The documentation includes:
- Getting started guides
- API references
- Node creation tutorials
- Integration examples
This repository contains:
- **AgentDock Core**: The core framework, located in `agentdock-core/`
- **Open Source Client**: A complete reference implementation built with Next.js, serving as a consumer of the AgentDock Core framework
- **Example Agents**: Ready-to-use agent configurations in the `agents/` directory
You can use AgentDock Core independently in your own applications, or use this repository as a starting point for building your own agent-powered applications.
AgentDock includes several pre-configured agent templates. Explore them in the `agents/` directory or read the Agent Templates Documentation for details on configuration.
Example implementations showcase specialized use cases and advanced functionality:
| Implementation | Description | Status |
|---|---|---|
| Orchestrated Agent | Example agent using orchestration to adapt behavior based on context | Available |
| Cognitive Reasoner | Tackles complex problems using structured reasoning & cognitive tools | Available |
| Agent Planner | Specialized agent for designing and implementing other AI agents | Available |
| Code Playground | Sandboxed code generation and execution with rich visualization capabilities | Planned |
| Generalist AI Agent | Manus-like agent that can use a browser and execute complex tasks | Planned |
The AgentDock Open Source Client requires API keys for LLM providers to function. These are configured in an environment file (`.env` or `.env.local`) which you create based on the provided `.env.example` file.
Add your LLM provider API keys (at least one is required):
```bash
# LLM Provider API Keys - at least one is required
ANTHROPIC_API_KEY=sk-ant-xxxxxxx   # Anthropic API key
OPENAI_API_KEY=sk-xxxxxxx          # OpenAI API key
GEMINI_API_KEY=xxxxxxx             # Google Gemini API key
DEEPSEEK_API_KEY=xxxxxxx           # DeepSeek API key
GROQ_API_KEY=xxxxxxx               # Groq API key
```
The AgentDock Open Source Client follows a priority order when resolving which API key to use:
1. Per-agent custom API key (set via agent settings in the UI)
2. Global settings API key (set via the settings page in the UI)
3. Environment variable (from `.env.local` or the deployment platform)
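The resolution logic described above amounts to "first non-empty source wins." The following sketch illustrates the idea; the function name and shapes are hypothetical, not the client's actual implementation.

```typescript
// Hypothetical key sources, highest priority first (mirroring the list above).
interface KeySources {
  perAgentKey?: string; // set in the agent's own settings
  globalKey?: string;   // set on the settings page
  envKey?: string;      // e.g. from process.env.ANTHROPIC_API_KEY
}

function resolveApiKey({ perAgentKey, globalKey, envKey }: KeySources): string | undefined {
  // First non-empty source wins; undefined means no key is configured.
  return perAgentKey || globalKey || envKey || undefined;
}
```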
Some tools also require their own API keys:
```bash
# Tool-specific API Keys
SERPER_API_KEY=      # Required for search functionality
FIRECRAWL_API_KEY=   # Required for deeper web search
```
For more details about environment configuration, see the implementation in `src/types/env.ts`.
AgentDock follows a BYOK (Bring Your Own Key) model:
- Add your API keys in the settings page of the application
- Alternatively, provide keys via request headers for direct API usage
- Keys are securely stored using the built-in encryption system
- No API keys are shared or stored on our servers
This project requires the use of `pnpm` for consistent dependency management. `npm` and `yarn` are not supported.
- **AI-Powered Applications**
  - Custom chatbots with any frontend
  - Command-line AI assistants
  - Automated data processing pipelines
  - Backend service integrations
- **Integration Capabilities**
  - Any AI provider (OpenAI, Anthropic, etc.)
  - Any frontend framework
  - Any backend service
  - Custom data sources and APIs
- **Automation Systems**
  - Data processing workflows
  - Document analysis pipelines
  - Automated reporting systems
  - Task automation agents
| Feature | Description |
|---|---|
| Framework Agnostic (Node.js Backend) | Core library integrates with Node.js backend stacks |
| Modular Design | Build complex systems from simple nodes |
| Extensible | Create custom nodes for any functionality |
| Secure | Built-in security features for API keys and data |
| BYOK | Use your own API keys for LLM providers |
| Self-Contained | Core framework has minimal dependencies |
| Multi-Step Tool Calls | Support for complex reasoning chains |
| Structured Logging | Detailed insights into agent execution |
| Robust Error Handling | Predictable behavior and simplified debugging |
| TypeScript First | Type safety and enhanced developer experience |
| Open Source Client | Complete Next.js reference implementation included |
| Orchestration | Dynamic control of agent behavior based on context |
| Session Management | Isolated state for concurrent conversations |
| Configurable Determinism | Balance AI creativity & predictability via node logic/workflows |
| Evaluation Framework | Robust tools to define, run, and analyze agent performance evaluations |
AgentDock's modular architecture is built upon these key components:
- BaseNode: The foundation for all nodes in the system
- AgentNode: The primary abstraction for agent functionality
- Tools & Custom Nodes: Callable capabilities and custom logic implemented as nodes.
- Node Registry: Manages the registration and retrieval of all node types
- Tool Registry: Manages tool availability for agents
- CoreLLM: Unified interface for interacting with LLM providers
- Provider Registry: Manages LLM provider configurations
- Evaluation Framework: Core components for agent assessment
- Error Handling: System for handling errors and ensuring predictable behavior
- Logging: Structured logging system for monitoring and debugging
- Orchestration: Controls tool availability and behavior based on conversation context
- Sessions: Manages state isolation between concurrent conversations
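Of the components above, session management is the one whose guarantee is easiest to show concretely: each session id owns its own state, so concurrent conversations never share mutable data. The sketch below is illustrative only, not the `agentdock-core` session API.

```typescript
// Per-conversation state; real session state would hold messages,
// orchestration status, cumulative token usage, etc.
interface SessionState {
  messages: string[];
}

class SessionStore {
  private sessions = new Map<string, SessionState>();

  // Get (or lazily create) the isolated state for one conversation.
  state(sessionId: string): SessionState {
    let s = this.sessions.get(sessionId);
    if (!s) {
      s = { messages: [] };
      this.sessions.set(sessionId, s);
    }
    return s;
  }
}

const store = new SessionStore();
store.state("session-a").messages.push("hello");
store.state("session-b").messages.push("bonjour");
// Mutating session-b's state leaves session-a untouched.
```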
For detailed technical documentation on these components, see the Architecture Overview.
Below is our development roadmap for AgentDock. Most improvements listed here relate to the core AgentDock framework (`agentdock-core`), which is currently developed locally and will be published as a versioned NPM package upon reaching a stable release. Some roadmap items may also involve enhancements to the open-source client implementation.
| Feature | Description | Category |
|---|---|---|
| Storage Abstraction Layer | Flexible storage system with pluggable providers | In Progress |
| Advanced Memory Systems | Long-term context management | In Progress |
| Vector Storage Integration | Embedding-based retrieval for documents and memory | In Progress |
| Evaluation for AI Agents | Comprehensive testing and evaluation framework | In Progress |
| Platform Integration | Support for Telegram, WhatsApp, and other messaging platforms | Planned |
| Multi-Agent Collaboration | Enable agents to work together | Planned |
| Model Context Protocol (MCP) Integration | Support for discovering and using external tools via MCP | Planned |
| Voice AI Agents | AI agents using voice interfaces and phone numbers via AgentNode | Planned |
| Telemetry and Traceability | Advanced logging and performance tracking | Planned |
| Workflow Runtime & Nodes | Core runtime, node types, and orchestration logic for complex automations | Planned |
| AgentDock Pro | Comprehensive enterprise cloud platform for scaling AI agents & workflows | Cloud |
| Natural Language AI Agent Builder | Visual builder + natural language agent and workflow construction | Cloud |
| Agent Marketplace | Monetizable agent templates | Cloud |
We welcome contributions to AgentDock! Please see the CONTRIBUTING.md for detailed contribution guidelines.
AgentDock is released under the MIT License.
AgentDock provides the foundation to build almost any AI-powered application or automation you can imagine. We encourage you to explore the framework, build innovative agents, and contribute back to the community. Let's build the future of AI interaction together!