Add Venice.ai as provider #2239
Labels: enhancement (New feature or request)
faces-of-eth added a commit to faces-of-eth/goose that referenced this issue on Apr 17, 2025:

Implements Venice.ai as a new model provider with support for various models, including:
- llama-3.3-70b (65K context)
- qwen-2.5-coder-32b for code optimization
- qwen-2.5-vl for vision capabilities
- Additional specialized models

The implementation includes:
- Live model list fetching for new model releases
- Full provider API integration
- UI configuration settings
- Improved error handling for model fetching
- Support for Venice.ai's authentication system

Closes block#2239
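The "live model list fetching" with "improved error handling" mentioned in the commit could look something like the sketch below: parse a models response into a name-to-context-window map, and fall back to a known set of models when the payload is missing or malformed. The response shape (`models`, `id`, `context_window` fields) is an assumption for illustration, not Venice.ai's documented API.

```python
import json

# Fallback context windows, taken from the model list in this issue.
# Used when the live models endpoint returns something unparseable.
FALLBACK_CONTEXT_WINDOWS = {
    "llama-3.3-70b": 65_536,
    "qwen-2.5-coder-32b": 32_768,
    "qwen-2.5-vl": 32_768,
}


def parse_model_list(payload: str) -> dict[str, int]:
    """Parse a hypothetical models-endpoint response into
    {model_id: context_window}, falling back to the known models
    when the payload is malformed."""
    try:
        data = json.loads(payload)
        return {m["id"]: int(m["context_window"]) for m in data["models"]}
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return dict(FALLBACK_CONTEXT_WINDOWS)


# A well-formed payload is parsed; garbage falls back to the static list.
sample = '{"models": [{"id": "llama-3.3-70b", "context_window": 65536}]}'
print(parse_model_list(sample))
print(parse_model_list("not json"))
```

Falling back rather than raising keeps the provider usable offline or when a new release changes the response shape, which matches the commit's goal of tolerating new model releases.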
Please explain the motivation behind the feature request
We need to add Venice.ai as a provider to enhance our AI capabilities while maintaining strong privacy guarantees and transparency. Venice.ai offers several unique advantages.

Venice is also agent-first, which is a good fit for Goose. Autonomous AI agents can programmatically access Venice.ai's APIs without any human interaction using the "api_keys" endpoint. Agents can manage their own wallets on the BASE blockchain, allowing them to programmatically acquire and stake VVV tokens to earn a daily VCU inference allocation, and Venice's new API endpoint lets them automate further by generating their own API keys.

This addition would give users more control over their data while maintaining high-quality AI capabilities.
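The self-provisioning flow described above could be sketched as below: an agent builds an authenticated POST to the "api_keys" endpoint named in this issue. The base URL, auth header, and body fields here are illustrative assumptions, not Venice.ai's documented API; the request is constructed but deliberately not sent.

```python
import json
import urllib.request


def build_api_key_request(base_url: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a POST to a hypothetical api_keys
    endpoint. Path, header, and body fields are assumptions based on
    the issue text, not Venice.ai's documented API."""
    body = json.dumps({"description": "goose-agent-key"}).encode()
    return urllib.request.Request(
        url=f"{base_url}/api_keys",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# An agent holding a token could provision its own key with one call
# (here we only inspect the request instead of sending it).
req = build_api_key_request("https://api.venice.ai/api/v1", "EXAMPLE_TOKEN")
print(req.get_method(), req.full_url)
```

Keeping request construction separate from transport makes the flow easy to unit-test and to swap onto whatever HTTP client the provider implementation already uses.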
Describe the solution you'd like
Implement Venice.ai as a provider with the following features:
- llama-3.3-70b (65K context)
- llama-3.1-405b (65K context)
- qwen-2.5-coder-32b (32K context)
- qwen-2.5-vl (32K context)
- llama-3.2-3b (131K context)
- deepseek-r1-671b (131K context)

Describe alternatives you've considered
Additional context
Key Technical Details:
Implementation Priority:
Model Capabilities Overview: