Releases · heshengtao/super-agent-party · GitHub

Releases: heshengtao/super-agent-party

(v0.1.3) Optimize automatic updates, automatic packaging, and persistent storage

14 May 06:57
6223b6f

  1. Optimized the automatic packaging process. Releases now include Windows and Linux builds; macOS is not yet supported, and Docker deployment is recommended on macOS.
  2. Optimized automatic updates.
  3. Implemented persistent storage, which retains the original configuration and uploaded files after updates.
  4. Added an option controlling when the knowledge base is queried, letting users choose between active and passive retrieval.
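As a rough illustration of the two retrieval timings, the sketch below contrasts passive retrieval (context is always fetched and prepended to the prompt) with active retrieval (the model decides when to call a search tool). `KnowledgeBase`, `build_prompt`, and the keyword matcher are illustrative stand-ins, not the project's actual API.

```python
# Illustrative sketch only: real retrieval would use vector search,
# and "active" mode would expose the search as a callable tool.
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    docs: list[str] = field(default_factory=list)

    def search(self, query: str) -> list[str]:
        # Naive keyword match stands in for real vector retrieval.
        words = query.lower().split()
        return [d for d in self.docs if any(w in d.lower() for w in words)]

def build_prompt(question: str, kb: KnowledgeBase, mode: str = "active") -> str:
    """'passive': always retrieve and prepend context.
    'active': leave retrieval to the model as a tool it may call."""
    if mode == "passive":
        context = "\n".join(kb.search(question))
        return f"Context:\n{context}\n\nQuestion: {question}"
    # Active mode: the model is only told a search tool exists.
    return f"Question: {question}\n(Tool available: kb_search)"
```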

(v0.1.2) New UI and loading interface; visual caching; customizable LLM tools

08 May 16:53
d15a8e9

  1. Customizable LLM tools: any project that exposes an Ollama-format or OpenAI-compatible interface can be used as a tool, for example comfyui LLM party, lightrag, and others.
  2. Visual cache: a vision model can be configured separately to recognize image content, and the recognition results are cached to save tokens. Configuring a vision model lets models without visual capabilities (such as most reasoning models) gain the ability to process images.
  3. New UI and loading interface.
  4. New tool: Pollinations image generation.
  5. Added system prompt editing and shortcut buttons (copy, edit, delete) to the conversation list.

(v0.1.1) A2A services; HTML and Mermaid format preview

28 Apr 07:04
674530a

  1. Added support for A2A services; A2A services can now be invoked.
  2. Support for previewing HTML and Mermaid formats has been added.
  3. Improved tool agents: agents built in the agent interface can now be used as tools.
  4. You can now query the chat history.
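Using a built agent as a tool amounts to advertising it to the main model in function-calling format. The sketch below assumes the OpenAI tools schema for that description; `run_agent` is a hypothetical stand-in for executing an agent built in the agent interface.

```python
# Illustrative only: run_agent() stands in for the project's agent runtime;
# the dict shape follows the OpenAI function-calling tools schema.
def run_agent(name: str, query: str) -> str:
    # Placeholder for dispatching the query to a built agent.
    return f"[{name}] answer to: {query}"

def agent_as_tool(name: str, description: str) -> dict:
    """Describe an agent as a function tool the main model may call."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }
```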

v0.1.0

19 Apr 04:35

super-agent-party: v0.1.0


Introduction

If you want to transform a large model into an intelligent agent that can access knowledge bases, connect to the internet, utilize MCP services, perform in-depth thinking and research, and also be accessible via the OpenAI API or directly through web and desktop applications, then this project is designed for you.
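Because the service speaks the OpenAI chat-completions protocol, a client only needs to point a standard request at it. The sketch below builds such a request; the base URL, port, and model alias are assumptions for illustration, not documented values.

```python
# Sketch of addressing the service through its OpenAI-compatible API.
# The host, port, and model name are assumed, not documented values.
import json

def chat_request(question: str, base_url: str = "http://127.0.0.1:3456/v1") -> tuple[str, bytes]:
    """Build the endpoint URL and JSON body for a chat completion call."""
    body = {
        "model": "super-model",  # assumed model alias
        "messages": [{"role": "user", "content": question}],
    }
    return f"{base_url}/chat/completions", json.dumps(body).encode()
```

Any OpenAI-compatible client library could send this request by setting its base URL to the service's address.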

Features

  1. Knowledge Base: Enables the large model to respond based on information contained within the knowledge base. If there are multiple knowledge bases, the model will proactively query the corresponding knowledge base according to the question's requirements.
  2. Internet Connectivity: Allows the large model to proactively search for information online based on question requirements. Currently supported options include:
    • duckduckgo (completely free, but not accessible in the Chinese network environment)
    • searxng (can be deployed locally using Docker)
    • tavily (requires applying for an API key)
    • jina (can be used without an API key for web scraping)
    • crawl4ai (can be deployed locally using Docker for web scraping).
  3. MCP Service: The large model can proactively invoke MCP services as needed. Two transports are currently supported: standard input/output (stdio) and Server-Sent Events (SSE).
  4. In-depth Thinking: Grafts the reasoning capabilities of reasoning models onto tool-invoking or multi-modal models, so the large model performs reasoning analysis before invoking tools. For example, deepseek-V3 can invoke tools but the reasoning model deepseek-R1 cannot; deepseek-R1's reasoning capability can be grafted onto deepseek-V3 so that deepseek-V3 uses deepseek-R1 for reasoning analysis before invoking tools.
  5. In-depth Research: Transforms user questions into tasks, which are progressively analyzed and reasoned about, followed by invoking tools. After outputting results, it rechecks whether the task is completed. If not, it continues analyzing and reasoning until the task is accomplished.
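The research loop described in item 5 can be sketched as a reason → tool call → completion check cycle. All three helper functions below are toy stand-ins for the model-driven steps, shown only to make the control flow concrete.

```python
# Toy sketch of the deep-research loop: turn the question into a task,
# reason, call a tool, then recheck completion until the task is done.
def reason(task: str, notes: list[str]) -> str:
    # Stand-in for the model planning its next step from the task and notes.
    return f"step-{len(notes) + 1}"

def call_tool(plan: str) -> str:
    # Stand-in for executing a tool according to the plan.
    return f"result-of-{plan}"

def is_complete(notes: list[str]) -> bool:
    # Stand-in for the model rechecking whether the task is finished.
    return len(notes) >= 3

def deep_research(question: str, max_steps: int = 10) -> list[str]:
    """Iterate reason -> tool call -> completion check until done."""
    task, notes = f"task: {question}", []
    for _ in range(max_steps):
        plan = reason(task, notes)
        notes.append(call_tool(plan))
        if is_complete(notes):
            break
    return notes
```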