8000 [Bug]: Gemini and group chat · Issue #1808 · ag2ai/ag2 · GitHub

Closed · borisbolliet opened this issue May 8, 2025 · 2 comments
Labels: bug (Something isn't working)

borisbolliet (Collaborator) commented:
Describe the bug

Run:

from typing import Annotated
from autogen import ConversableAgent, LLMConfig
from autogen.agentchat import initiate_group_chat
from autogen.agentchat.group.patterns import AutoPattern
from autogen.agentchat.group import ReplyResult, AgentNameTarget

# Define the query classification tool
def classify_query(
    query: Annotated[str, "The user query to classify"]
) -> ReplyResult:
    """Classify a user query as technical or general."""
    technical_keywords = ["error", "bug", "broken", "crash", "not working", "shutting down"]

    if any(keyword in query.lower() for keyword in technical_keywords):
        return ReplyResult(
            message="This appears to be a technical issue. Routing to technical support...",
            target=AgentNameTarget("tech_agent")
        )
    else:
        return ReplyResult(
            message="This appears to be a general question. Routing to general support...",
            target=AgentNameTarget("general_agent")
        )

# Create the agents
# llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")
# llm_config = LLMConfig(api_type="google", model="gemini-2.5-pro-exp-03-25")
llm_config = LLMConfig(api_type="google", model="gemini-2.5-flash-preview-04-17")
with llm_config:
    triage_agent = ConversableAgent(
        name="triage_agent",
        system_message="""You are a triage agent. For each user query,
        identify whether it is a technical issue or a general question.
        Use the classify_query tool to categorize queries and route them appropriately.
        Do not provide suggestions or answers, only route the query.""",
        functions=[classify_query]
    )

    tech_agent = ConversableAgent(
        name="tech_agent",
        system_message="""You solve technical problems like software bugs
        and hardware issues. After analyzing the problem, use the provide_technical_solution
        tool to format your response consistently."""
    )

    general_agent = ConversableAgent(
        name="general_agent",
        system_message="You handle general, non-technical support questions."
    )

# User agent
user = ConversableAgent(
    name="user",
    human_input_mode="ALWAYS"
)

# Set up the conversation pattern
pattern = AutoPattern(
    initial_agent=triage_agent,
    agents=[triage_agent, tech_agent, general_agent],
    user_agent=user,
    group_manager_args={"llm_config": llm_config}
)

# Run the chat
result, context, last_agent = initiate_group_chat(
    pattern=pattern,
    messages="My laptop keeps shutting down randomly. Can you help?",
    max_rounds=10
)

This seems to crash with Gemini models.

The error is either:

AttributeError: Response candidate content part has no text.

or

google.api_core.exceptions.InternalServerError: 500 Internal error encountered.
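
For context, that AttributeError appears to be what the Google SDK raises when .text is read from a response part that holds a function call instead of text, which is exactly what the tool-calling triage_agent produces. A minimal sketch of a defensive read (the extract_text helper is hypothetical, not part of ag2):

# Hypothetical helper: read text out of a Gemini response whose parts may
# contain function calls instead of text. Accessing response.text directly
# on such a response is the kind of read that fails with
# "Response candidate content part has no text."
def extract_text(response) -> str:
    parts = response.candidates[0].content.parts
    # Keep only parts that actually carry text; function-call parts
    # typically have part.function_call set and no text.
    texts = [part.text for part in parts if getattr(part, "text", None)]
    return "\n".join(texts)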

borisbolliet (Collaborator, Author) commented:

It works with:

llm_config = LLMConfig(api_type="google", model="gemini-2.0-flash")

Sorry if it's because of my Google/API setup!

borisbolliet (Collaborator, Author) commented on May 8, 2025:

It is indeed because of my API setup; thanks @franciscovillaescusa.

My group chat workflows behave well only when I use Gemini through Vertex AI.

To do so, I cleared the API-key environment variables:

export GEMINI_API_KEY=""
export GOOGLE_API_KEY=""
export GOOGLE_GEMINI_API_KEY=""

and set

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/gemini.json"

You can follow these instructions to set up gemini.json.

where gemini.json contains:

{
  "type": "service_account",
  "project_id": "xyz",
  "private_key_id": "xyz",
  "private_key": "-----BEGIN PRIVATE KEY-----\nxyz\n-----END PRIVATE KEY-----\n",
  "client_email": "xyz@xyz.iam.gserviceaccount.com",
  "client_id": "123",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/xyz.iam.gserviceaccount.com",
  "universe_domain": "googleapis.com"
}

I put these exports in my .bashrc, then closed the shell and opened a new one.
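
If it helps anyone, I believe the Vertex AI path can also be selected directly in the config rather than through environment variables, going by the Gemini examples in the docs; the project_id and location values below are placeholders, and I have not run this exact snippet:

from autogen import LLMConfig

# Sketch, untested: point ag2 at Gemini on Vertex AI explicitly.
# With GOOGLE_APPLICATION_CREDENTIALS exported, the client should pick up
# the service-account credentials automatically. Placeholder values below.
llm_config = LLMConfig(
    api_type="google",
    model="gemini-2.5-flash-preview-04-17",
    project_id="xyz",        # matches project_id in gemini.json
    location="us-central1",  # any supported Vertex AI region
)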

If I am not mistaken, messages are processed differently with use_vertexai=True than with use_vertexai=False?

e.g.:

if self.use_vertexai:
    model = GenerativeModel(
        model_name,
        generation_config=GenerationConfig(**generation_config),
        safety_settings=safety_settings,
        system_instruction=system_instruction,
        tool_config=tool_config,
        tools=tools,
    )

    chat = model.start_chat(history=gemini_messages[:-1], response_validation=response_validation)
    response = chat.send_message(gemini_messages[-1].parts, stream=stream, safety_settings=safety_settings)
else:
    client = genai.Client(api_key=self.api_key, http_options=http_options)
    generate_content_config = GenerateContentConfig(
        safety_settings=safety_settings,
        system_instruction=system_instruction,
        tools=tools,
        tool_config=tool_config,
        **generation_config,
    )
    chat = client.chats.create(model=model_name, config=generate_content_config, history=gemini_messages[:-1])
    response = chat.send_message(message=gemini_messages[-1].parts)

For me, things work well only with Vertex AI.
Closing the issue with that!
