[Bug]: Gemini and group chat #1808
It works with:

```python
llm_config = LLMConfig(api_type="google", model="gemini-2.0-flash")
```

Sorry if it's because of my Google/API setup!
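For context, the working config above can be wired into a group chat roughly like this. This is only a sketch assuming AG2's `ConversableAgent`/`GroupChat`/`GroupChatManager` API; the agent names and the task prompt are made up, and running it requires valid Google credentials in the environment:

```python
from autogen import ConversableAgent, GroupChat, GroupChatManager, LLMConfig

# The configuration reported to work in this thread
llm_config = LLMConfig(api_type="google", model="gemini-2.0-flash")

# Two illustrative agents sharing the same Gemini config
writer = ConversableAgent(name="writer", llm_config=llm_config)
critic = ConversableAgent(name="critic", llm_config=llm_config)

groupchat = GroupChat(agents=[writer, critic], messages=[], max_round=4)
manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)

# Needs GOOGLE_API_KEY (or Vertex AI credentials) set to actually run
writer.initiate_chat(manager, message="Draft a one-line project summary.")
```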
It is indeed because of API setup, thanks @franciscovillaescusa. My group chat workflows behave well only if I use Gemini through Vertex AI. To do so, I cleared these variables:

```shell
export GEMINI_API_KEY=""
export GOOGLE_API_KEY=""
export GOOGLE_GEMINI_API_KEY=""
```

and set

```shell
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/gemini.json"
```

in my bashrc, then closed the shell and opened a new one. You can follow these instructions to get the gemini.json set up.

If I am not mistaken, messages are processed differently with `use_vertexai=True` than with `use_vertexai=False`, e.g.:

```python
if self.use_vertexai:
    model = GenerativeModel(
        model_name,
        generation_config=GenerationConfig(**generation_config),
        safety_settings=safety_settings,
        system_instruction=system_instruction,
        tool_config=tool_config,
        tools=tools,
    )
    chat = model.start_chat(history=gemini_messages[:-1], response_validation=response_validation)
    response = chat.send_message(
        gemini_messages[-1].parts, stream=stream, safety_settings=safety_settings
    )
else:
    client = genai.Client(api_key=self.api_key, http_options=http_options)
    generate_content_config = GenerateContentConfig(
        safety_settings=safety_settings,
        system_instruction=system_instruction,
        tools=tools,
        tool_config=tool_config,
        **generation_config,
    )
    chat = client.chats.create(
        model=model_name, config=generate_content_config, history=gemini_messages[:-1]
    )
    response = chat.send_message(message=gemini_messages[-1].parts)
```

For me, things work well only with Vertex AI.
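To make the history/last-message split in both branches concrete, here is a minimal, self-contained sketch (pure Python, no Gemini SDK). The `to_gemini` helper and its role mapping are illustrative assumptions for this example, not AG2's actual converter:

```python
# Illustrative sketch: map OpenAI-style chat messages to Gemini-style
# role/parts dicts, then split them into history plus the final message,
# mirroring the gemini_messages[:-1] / gemini_messages[-1] split above.

def to_gemini(messages):
    """Hypothetical converter: OpenAI roles -> Gemini roles ('user'/'model')."""
    role_map = {"user": "user", "assistant": "model", "system": "user"}
    return [
        {"role": role_map.get(m["role"], "user"), "parts": [m["content"]]}
        for m in messages
    ]

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi there"},
    {"role": "user", "content": "Summarize our chat"},
]

gemini_messages = to_gemini(messages)
# history seeds chat creation; last["parts"] is what send_message receives
history, last = gemini_messages[:-1], gemini_messages[-1]
```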
Describe the bug
Run:
seems to be crashing with Gemini models.
Error is either:
or
Steps to reproduce
No response
Model Used
No response
Expected Behavior
No response
Screenshots and logs
No response
Additional Information
No response