heshengtao/super-agent-party

Transform LLM API into agent API that can access knowledge repositories, networking, MCP, A2A, deep thinking, and deep research, and can also be used directly through OpenAI API calls, on the web, or on the desktop.

Introduction

🚀 Zero-intrusion, minimal setup: empower your LLM API with enterprise-grade capabilities without modifying a single line of code. Seamlessly attach knowledge bases, real-time internet access, MCP, A2A, deep-thinking control, in-depth research, and custom tools to your LLM interface, creating a plug-and-play LLM enhancement platform.


Why Choose Us?

  • ✅ Efficient development: Supports streaming output, does not affect the original API's response speed, and requires no code changes
  • ✅ Quick access: No need to integrate separately with multiple service providers for a single feature; comes pre-configured with adapters for mainstream LLM vendors and agent protocols, compatible with OpenAI/Ollama/MCP/A2A, so you can experience next-generation LLM middleware instantly
  • ✅ High customization: Supports custom knowledge bases, real-time internet access, MCP, A2A, deep-thinking control, in-depth research, custom tools, and other advanced agent features, creating a pluggable LLM enhancement platform. Customized agents can be saved as snapshots for convenient reuse, and snapshotted agents can be called directly through the OpenAI API.
  • ✅ Data security: Supports local knowledge bases and local model access, keeping data on-premises and preserving enterprise data security. All files are cached locally and never uploaded anywhere.
  • ✅ Team collaboration: Supports multi-user sharing of knowledge bases, model services, tools, MCP, A2A, and other resources, improving team efficiency. Chat records and the files and images in the knowledge base are stored locally and can serve as a local file or image host.

Installation Method

Windows Desktop Installation

👉 Click to download

⭐ Note! During installation, choose to install for the current user only; otherwise administrator privileges will be required to launch the app.

Linux Desktop Installation

We provide two mainstream Linux installation package formats for your convenience in different scenarios.

1. Install using .AppImage (Recommended)

.AppImage is a Linux application format that does not require installation and can be used immediately. Suitable for most Linux distributions.

👉 Click to download

2. Install using .deb package (Suitable for Ubuntu/Debian systems)

👉 Click to download

Docker Deployment (Recommended)

  • Two commands to install this project:

    docker pull ailm32442/super-agent-party:latest
    docker run -d -p 3456:3456 -v ./super-agent-data:/app/data ailm32442/super-agent-party:latest
  • Plug and play: access http://localhost:3456/
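The two `docker` commands above can equivalently be captured in a compose file so the container restarts with the host. This is a minimal sketch: the image tag, port, and volume come from the commands above, while the service name and restart policy are arbitrary choices, not project defaults:

```yaml
services:
  super-agent-party:
    image: ailm32442/super-agent-party:latest
    ports:
      - "3456:3456"          # web UI and OpenAI-compatible API
    volumes:
      - ./super-agent-data:/app/data   # persist local data across restarts
    restart: unless-stopped
```

Start it with `docker compose up -d`, then access http://localhost:3456/ as before.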

Source Code Deployment

  • Windows:

    git clone https://github.com/heshengtao/super-agent-party.git
    cd super-agent-party
    uv sync
    npm install
    start_with_dev.bat
  • Linux or Mac:

    git clone https://github.com/heshengtao/super-agent-party.git
    cd super-agent-party
    uv sync
    npm install
    chmod +x start_with_dev.sh
    ./start_with_dev.sh

For detailed deployment methods, please refer to the Deployment and Usage Documentation

Usage

  • Desktop: Click the desktop icon to use immediately.

  • Web: Access http://localhost:3456/ after startup.

  • API call: Developer-friendly and fully compatible with the OpenAI format; streams output in real time without affecting the original API's response speed. No changes to your calling code are needed:

    from openai import OpenAI
    client = OpenAI(
      api_key="super-secret-key",
      base_url="http://localhost:3456/v1"
    )
    response = client.chat.completions.create(
      model="super-model",
      messages=[
          {"role": "user", "content": "What is Super Agent Party?"}
      ]
    )
    print(response.choices[0].message.content)

Function Introduction

  1. Switch to the calling method from the sidebar to view how to call Agent Party in OpenAI API or web mode.
  2. Knowledge base, allowing large models to answer based on information in the knowledge base. If there are multiple knowledge bases, the model will actively query the corresponding knowledge base according to the question.
  3. Internet access function, allowing large models to actively query information on the internet according to the question. Currently supported:
  • duckduckgo (completely free, inaccessible in Chinese network environment)
  • searxng (can be deployed locally with Docker)
  • tavily (requires applying for an API key)
  • jina (can be used without an API key for web scraping)
  • crawl4ai (can be deployed locally with Docker for web scraping).
  4. MCP service, allowing large models to actively call MCP services according to the question. Currently supports three calling methods: standard input/output, server-sent events (SSE), and WebSocket.
  5. A2A service, allowing large models to actively call A2A services according to the question.
  6. Deep thinking, which transplants the reasoning ability of reasoning models onto tool-calling or multimodal models, so that a large model runs a reasoning model's analysis before calling tools. For example, deepseek-V3 supports tool calls while the reasoning model deepseek-R1 does not; transplanting deepseek-R1's reasoning onto deepseek-V3 lets deepseek-V3 use deepseek-R1 for reasoning analysis before tool calls.
  7. In-depth research, which converts the user's question into a task, analyzes and reasons step by step, calls tools, and outputs the result. If the task is not complete, it continues analyzing, reasoning, and calling tools until it is.
  8. Custom tools: any project that implements the Ollama or OpenAI interface format can be used as a tool.
  9. Visual caching: a separate vision model can be configured to recognize image content, and the recognition results are cached to save tokens. Configuring a vision model lets models without visual capabilities (such as most reasoning models) acquire them.

Disclaimer:

This open-source project and its content (hereinafter, the "project") are provided for reference only and carry no express or implied warranty. The project contributors assume no responsibility for the completeness, accuracy, reliability, or suitability of the project. Any reliance on the project's content is at the user's own risk. In no event shall the project contributors be liable for any indirect, special, or incidental loss or damage arising from use of the project's content.

License Agreement

This project uses a dual licensing model:

  1. By default, this project follows the GNU Affero General Public License v3.0 (AGPLv3) license agreement
  2. If you need to use this project for closed-source commercial purposes, you must obtain a commercial license from the project administrator

Using this project for closed-source commercial purposes without written authorization is considered a violation of this agreement. The complete text of AGPLv3 can be found in the LICENSE file in the project root directory or at gnu.org/licenses.

Support:

Join the Community

If you have any questions or issues with the project, you are welcome to join our community.

  1. QQ Group: 931057213

  2. WeChat Group: we_glm (add the assistant's WeChat and join the group)

  3. Discord: Discord link

Follow Us

  1. If you want to keep up with the latest features of this project, please follow the Bilibili account: Pai Jiang

Donate

If my work has brought value to you, please consider buying me a cup of coffee! Your support not only injects vitality into the project but also warms the creator's heart. ☕💖 Every cup counts!
