ACP is an open protocol for communication between AI agents, applications, and humans.
Modern AI agents are often built in isolation, across different frameworks, teams, and infrastructures. This fragmentation slows innovation and makes it harder for agents to work together effectively. ACP solves this by enabling agents to communicate and coordinate using multimodal messages.
ACP enables agents to:
- Send and receive rich messages — like text, code, files, or media
- Respond in real time, in the background, or as a stream
- Let others discover what they can do
- Collaborate on long-running tasks
- Share state with each other when needed
ACP powers agent communication on the BeeAI Platform — a place where you can discover, run, and share agents.
Take the hands-on introduction to ACP in this DeepLearning.AI short course.

Recent releases have added:
- 🧭 Trajectory Metadata - Enhanced MessagePart with TrajectoryMetadata for tracking multi-step reasoning and tool calling
- 🌐 Distributed Sessions - Session continuity across multiple server instances using URI-based resource sharing
- 🔍 RAG LlamaIndex Agent - New example agent demonstrating Retrieval-Augmented Generation with LlamaIndex
- 📚 Citation Metadata - Enhanced MessagePart with CitationMetadata for improved source tracking and attribution
- ⚡ High Availability Support - Deploy ACP servers with centralized storage (Redis/PostgreSQL) for scalable, fault-tolerant setups
- 📝 Message Role Parameter - Added `role` parameter to the Message structure for better agent identification
- 🔄 TypeScript SDK (Client) - Full TypeScript client library for interacting with ACP agents
- 📚 Documentation. Comprehensive guides and reference material for implementing and using ACP.
- 📝 OpenAPI Specification. Defines the REST API endpoints, request/response formats, and data models that make up the ACP protocol.
- 🛠️ Python SDK. Contains a server implementation, client libraries, and model definitions to easily create and interact with ACP agents.
- 🛠️ TypeScript SDK. Contains client libraries and model definitions to easily interact with ACP agents.
- 💻 Examples. Ready-to-run code samples demonstrating how to build agents and clients that communicate using ACP.
| Concept | Description |
| --- | --- |
| Agent Manifest | A model describing an agent's capabilities—its name, description, and optional metadata and status—for discovery and composition without exposing implementation details. |
| Run | A single agent execution with specific inputs. Supports sync or streaming, with intermediate and final output. |
| Message | The core structure for communication, consisting of a sequence of ordered components that form a complete, structured, and multimodal exchange of information. |
| MessagePart | The individual content units within a `Message`, which can include types like text, image, or JSON. Together, they combine to create structured, multimodal communication. |
| Await | Lets agents pause to request information from the client and resume, enabling interactive exchanges where the agent can wait for external input (data, actions, etc.) before continuing. |
| Sessions | Enable agents to maintain state and conversation history across multiple interactions using session identifiers. The SDK automatically manages session state, allowing agents to access complete interaction history within a session. |
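As a rough sketch of how these concepts look on the wire, a Message is an ordered list of MessageParts, each with its own content type. The snippet below uses plain Python dicts rather than the SDK models; the field names mirror the curl examples in the quickstart, so treat it as illustrative, not normative:

```python
import json

# Illustrative sketch of the Message/MessagePart wire format using plain
# dicts. Field names follow the REST examples in the quickstart below.
message = {
    "role": "user",
    "parts": [
        # A text part and a structured-data part in one multimodal Message
        {"content": "Summarize this table.", "content_type": "text/plain"},
        {"content": json.dumps({"rows": 3}), "content_type": "application/json"},
    ],
}

# A run request wraps one or more Messages as input for a named agent
body = json.dumps({"agent_name": "echo", "input": [message]})
print(body)
```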
Note: This guide uses `uv`. See the uv primer for more details.
1. Initialize your project
```sh
uv init --python '>=3.11' my_acp_project
cd my_acp_project
```
2. Add the ACP SDK
```sh
uv add acp-sdk
```
3. Create an agent
Let's create a simple "echo agent" that returns any message it receives.
Create an `agent.py` file in your project directory with the following code:
```python
# agent.py
import asyncio
from collections.abc import AsyncGenerator

from acp_sdk.models import Message
from acp_sdk.server import Context, RunYield, RunYieldResume, Server

server = Server()


@server.agent()
async def echo(
    input: list[Message], context: Context
) -> AsyncGenerator[RunYield, RunYieldResume]:
    """Echoes everything"""
    for message in input:
        await asyncio.sleep(0.5)
        yield {"thought": "I should echo everything"}
        await asyncio.sleep(0.5)
        yield message


server.run()
```
4. Start the ACP server
```sh
uv run agent.py
```
Your server should now be running at http://localhost:8000.
5. Verify your agent is available
In another terminal, run the following `curl` command:
```sh
curl http://localhost:8000/agents
```
You should see a JSON response containing your `echo` agent, confirming it's available:
```json
{
  "agents": [
    { "name": "echo", "description": "Echoes everything", "metadata": {} }
  ]
}
```
6. Run the agent via HTTP
Run the following `curl` command:
```sh
curl -X POST http://localhost:8000/runs \
  -H "Content-Type: application/json" \
  -d '{
        "agent_name": "echo",
        "input": [
          {
            "role": "user",
            "parts": [
              {
                "content": "Howdy!",
                "content_type": "text/plain"
              }
            ]
          }
        ]
      }'
```
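If you prefer Python over curl, the same request can be assembled with the standard library alone. This is a sketch: it only builds the request object, and actually sending it assumes the server from step 4 is running on localhost:8000.

```python
import json
import urllib.request

# Build the same run request as the curl command above, stdlib only.
payload = {
    "agent_name": "echo",
    "input": [
        {
            "role": "user",
            "parts": [{"content": "Howdy!", "content_type": "text/plain"}],
        }
    ],
}

request = urllib.request.Request(
    "http://localhost:8000/runs",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# To actually send it (with the server from step 4 running):
# with urllib.request.urlopen(request) as response:
#     print(json.load(response))
```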
Your response should include the echoed message "Howdy!":
```json
{
  "run_id": "44e480d6-9a3e-4e35-8a03-faa759e19588",
  "agent_name": "echo",
  "session_id": "b30b1946-6010-4974-bd35-89a2bb0ce844",
  "status": "completed",
  "await_request": null,
  "output": [
    {
      "role": "agent/echo",
      "parts": [
        {
          "name": null,
          "content_type": "text/plain",
          "content": "Howdy!",
          "content_encoding": "plain",
          "content_url": null
        }
      ]
    }
  ],
  "error": null
}
```
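Pulling the reply text out of a run response is a matter of walking `output` → `parts` → `content`. Below is a small illustrative helper (`extract_text` is a hypothetical name, not part of the SDK) applied to an abbreviated copy of the sample response above:

```python
import json

def extract_text(run: dict) -> list[str]:
    # Collect the content of every text/plain part across all output messages.
    # Illustrative helper, not part of the ACP SDK.
    return [
        part["content"]
        for message in run.get("output", [])
        for part in message.get("parts", [])
        if part.get("content_type") == "text/plain"
    ]

# The sample response from the step above, trimmed to the relevant fields.
sample = json.loads("""
{
  "status": "completed",
  "output": [
    {
      "role": "agent/echo",
      "parts": [
        {"content_type": "text/plain", "content": "Howdy!", "content_encoding": "plain"}
      ]
    }
  ]
}
""")

print(extract_text(sample))  # -> ['Howdy!']
```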
7. Build an ACP client
Here's a simple ACP client to interact with your `echo` agent.
Create a `client.py` file in your project directory with the following code:
```python
# client.py
import asyncio

from acp_sdk.client import Client
from acp_sdk.models import Message, MessagePart


async def example() -> None:
    async with Client(base_url="http://localhost:8000") as client:
        run = await client.run_sync(
            agent="echo",
            input=[
                Message(
                    parts=[
                        MessagePart(
                            content="Howdy to echo from client!!",
                            content_type="text/plain",
                        )
                    ]
                )
            ],
        )
        print(run.output)


if __name__ == "__main__":
    asyncio.run(example())
```
8. Run the ACP client
```sh
uv run client.py
```
You should see the echoed response printed to your console. 🎉
We are grateful for the efforts of our initial contributors, who have played a vital role in getting ACP off the ground. As we continue to grow and evolve, we invite others to join our vibrant community and contribute to our project's ongoing development. For more information, please visit the Contribute page of our documentation.
For information about maintainers, see MAINTAINERS.md.
Developed by contributors to the BeeAI project, this initiative is part of the Linux Foundation AI & Data program. Its development follows open, collaborative, and community-driven practices.