Fix issue #2984: Add support for watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 model #2986
base: main
Conversation
…17b-128e-instruct-fp8 model
- Added watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 to the watsonx models list in constants.py
- Created comprehensive tests to verify CLI model selection and LLM instantiation
- All existing tests continue to pass with no regressions
- Fixes CLI validation error when users try to select this model for the watsonx provider

Resolves #2984
Co-Authored-By: João <joao@crewai.com>
🤖 Devin AI Engineer
I'll be helping with this pull request! Here's what you should know:
✅ I will automatically:
Note: I can only respond to comments from users who have write access to this repository.
⚙️ Control Options:
Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment for PR #2986

Overview
The PR effectively adds support for the watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 model.

Files Modified
Detailed Analysis

1. Changes in constants.py
Strengths:
Suggestions for Improvement:
2. Analysis of test_watsonx_model_support.py
Strengths:
Areas for Improvement:
General Recommendations
Conclusion
The PR accomplishes its goals with sufficient test coverage and clarity in implementation. However, addressing the suggested improvements, particularly type safety, testing robustness, and documentation, will enhance the overall quality and maintainability of the code. After addressing these recommendations, the changes will be ready for merge.
- Removed unused pytest import
- Removed unused MagicMock import
- All tests continue to pass locally
- Addresses CI lint check failure

Co-Authored-By: João <joao@crewai.com>
- Added type hints to all test functions and variables for better maintainability
- Added parametrized test to verify multiple watsonx models can be selected
- Imported pytest for parametrized testing functionality
- All tests continue to pass locally (6 passed)
- Addresses review suggestions from joaomdmoura

Co-Authored-By: João <joao@crewai.com>
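The commit above mentions a parametrized test over several watsonx models. A minimal sketch of what such a test could look like is below; the test name and the exact parametrized cases are assumptions, only the MODELS constant in src/crewai/cli/constants.py and the new model identifier come from this PR.

```python
# Hypothetical sketch of a parametrized watsonx model test; the function
# name and the parameter cases are illustrative, not the PR's actual file.
import pytest

from crewai.cli.constants import MODELS


@pytest.mark.parametrize(
    "model_name",
    [
        "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8",
        # the real test would list additional watsonx models here
    ],
)
def test_watsonx_model_is_selectable(model_name: str) -> None:
    # The CLI offers MODELS[provider] as the selectable list, so membership
    # in this list is what provider selection validates against.
    watsonx_models: list[str] = MODELS.get("watsonx", [])
    assert model_name in watsonx_models
```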
Fix issue #2984: Add support for watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 model
Summary
This PR fixes issue #2984 by adding support for the watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 model to the watsonx provider in CrewAI. The issue occurred when users tried to select this model through the CLI crew creation process, but it wasn't available in the predefined models list.

Changes Made
- src/crewai/cli/constants.py: added the new model to the watsonx models list (a sketch of this change is shown below)
- tests/cli/test_watsonx_model_support.py: new tests to verify CLI model selection and LLM instantiation
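A rough sketch of the constants.py change, assuming the file keeps provider model lists in the MODELS dict; the surrounding entries below are placeholders, not the file's actual contents:

```python
# src/crewai/cli/constants.py (illustrative excerpt; only the last entry
# reflects this PR, the other models shown are placeholders)
MODELS = {
    "watsonx": [
        # ... existing watsonx models ...
        "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8",  # added by this PR
    ],
    # ... other providers ...
}
```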
Root Cause Analysis
The issue was in the CLI validation mechanism. When users create crews through the CLI:
- create_crew.py checks if the provider has predefined models in the MODELS constant
- provider.py uses MODELS.get(provider, []) to get available models for selection

The meta-llama/llama-4-maverick-17b-128e-instruct-fp8 model was missing from the watsonx models list, causing the validation to fail. A minimal sketch of this lookup path is shown below.
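In the sketch below, the two helper functions are purely illustrative stand-ins for logic in provider.py and create_crew.py; only the MODELS constant and the MODELS.get(provider, []) call come from the description above.

```python
# Illustrative sketch of the CLI validation path described above; the two
# functions are stand-ins, not the actual CrewAI source.
from crewai.cli.constants import MODELS


def available_models(provider: str) -> list[str]:
    # provider.py-style lookup: an empty list means the CLI has no
    # predefined models to offer for this provider.
    return MODELS.get(provider, [])


def is_valid_choice(provider: str, model: str) -> bool:
    # create_crew.py-style check: the selected model must be in the
    # predefined list, otherwise CLI validation fails.
    return model in available_models(provider)


# Before this PR this returned False, producing the CLI validation error
# reported in issue #2984; with the model added to the watsonx list it
# returns True.
print(is_valid_choice(
    "watsonx",
    "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8",
))
```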
Testing
- uv run pytest tests/llm_test.py -v: 37 passed, 1 skipped
- uv run pytest tests/cli/test_watsonx_model_support.py -v: 3 passed

Files Changed
- src/crewai/cli/constants.py - Added the new model to watsonx models list
- tests/cli/test_watsonx_model_support.py - New test file with comprehensive coverage (see the instantiation sketch below)
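The new test file is described as covering both model selection and LLM instantiation. A hedged sketch of the instantiation check, assuming crewai.LLM can be constructed with the model string without contacting the provider; the test name is illustrative:

```python
# Illustrative instantiation check; assumes crewai.LLM accepts the watsonx
# model identifier at construction time without making any API calls.
from crewai import LLM


def test_llm_instantiation_with_new_watsonx_model() -> None:
    model_id = "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8"
    llm = LLM(model=model_id)
    # The constructed LLM should carry the requested model identifier.
    assert llm.model == model_id
```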

Link to Devin run
https://app.devin.ai/sessions/dbb7a09247e74de88aa36166cd96f6e1
Requested by
João (joao@crewai.com)
Resolves #2984