Fix issue #2984: Add support for watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 model #2986

Open · wants to merge 3 commits into main

Conversation

devin-ai-integration[bot] (Contributor)


Summary

This PR fixes issue #2984 by adding support for the watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 model to the watsonx provider in CrewAI. The issue occurred when users tried to select this model during CLI crew creation: the model was not present in the predefined models list, so the CLI rejected it.

Changes Made

  • Added the missing model to the watsonx models list in src/crewai/cli/constants.py (sketched after this list)
  • Created comprehensive tests in tests/cli/test_watsonx_model_support.py to verify:
    • Model is included in the watsonx models list
    • Model can be selected through CLI provider selection
    • Model list maintains proper ordering
  • Verified LLM instantiation works with the new model
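
For reference, the functional change is a single new entry in the MODELS mapping in src/crewai/cli/constants.py. A minimal sketch (neighboring entries are elided, not quoted from the file):

    # src/crewai/cli/constants.py (sketch)
    MODELS = {
        "watson": [
            # ...existing watsonx entries...
            "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8",  # entry added by this PR
        ],
        # ...other providers...
    }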

Root Cause Analysis

The issue was in the CLI validation mechanism. When users create crews through the CLI:

  1. create_crew.py checks if the provider has predefined models in the MODELS constant
  2. provider.py uses MODELS.get(provider, []) to get available models for selection
  3. If the model isn't in the list, users cannot select it through the CLI interface

The meta-llama/llama-4-maverick-17b-128e-instruct-fp8 model was missing from the watsonx models list, causing the validation to fail.
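
In code terms, the selection path reduces to a membership check against that constant. A minimal sketch (available_models_for is a hypothetical stand-in for the lookup in provider.py; MODELS is the real constant):

    from crewai.cli.constants import MODELS

    def available_models_for(provider: str) -> list[str]:
        # provider.py resolves selectable models via MODELS.get(provider, []);
        # a model absent from this list cannot be chosen in the CLI flow.
        return MODELS.get(provider, [])

    # Before this PR, this assertion failed for the new Llama 4 model.
    assert "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8" in available_models_for("watson")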

Testing

  • All existing tests pass - ran uv run pytest tests/llm_test.py -v with 37 passed, 1 skipped
  • New tests pass - ran uv run pytest tests/cli/test_watsonx_model_support.py -v with 3 passed
  • Manual verification - created a reproduction script (sketched after this list) that confirms:
    • Model is found in watsonx models list
    • Model can be selected through CLI
    • LLM can be instantiated with the model
  • No regressions - full test suite continues to pass
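
The reproduction script itself is not part of the diff; a minimal sketch of the checks it performs, assuming crewAI's generic LLM wrapper and the MODELS constant:

    from crewai import LLM
    from crewai.cli.constants import MODELS

    MODEL_ID = "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8"

    # 1. The model is present in the watsonx list used by the CLI.
    assert MODEL_ID in MODELS.get("watson", [])

    # 2. An LLM can be instantiated with the model id. Construction alone
    #    makes no API call, so no watsonx credentials are needed here.
    llm = LLM(model=MODEL_ID)
    assert llm.model == MODEL_ID
    print("Reproduction checks passed")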

Files Changed

  • src/crewai/cli/constants.py - Added the new model to watsonx models list
  • tests/cli/test_watsonx_model_support.py - New test file with comprehensive coverage

Link to Devin run

https://app.devin.ai/sessions/dbb7a09247e74de88aa36166cd96f6e1

Requested by

João (joao@crewai.com)

Resolves #2984

…17b-128e-instruct-fp8 model

- Added watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 to the watsonx models list in constants.py
- Created comprehensive tests to verify CLI model selection and LLM instantiation
- All existing tests continue to pass with no regressions
- Fixes CLI validation error when users try to select this model for watsonx provider

Resolves #2984

Co-Authored-By: João <joao@crewai.com>
devin-ai-integration[bot] (Contributor, Author)

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

@joaomdmoura (Collaborator)

Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment for PR #2986

Overview

The PR effectively adds support for the watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8 model to the CrewAI framework, including necessary constants and robust test coverage for the new model.

Files Modified

  1. src/crewai/cli/constants.py
  2. tests/cli/test_watsonx_model_support.py (new file)

Detailed Analysis

1. Changes in constants.py

Strengths:

  • The addition of the new model is clean and integrates well with the existing codebase.
  • Maintains alphabetical ordering and follows the established naming conventions, contributing to code consistency and readability.

Suggestions for Improvement:

  1. Documentation Enhancements:
    • It is recommended to include a comment documenting the model's specifications directly in the constants file for future reference and maintainability:
    MODELS = {
        "watson": [
            # Llama 4 Maverick model - 17B active parameters, 128-expert MoE, FP8 quantization
            "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8",
        ],
    }

2. Analysis of test_watsonx_model_support.py

Strengths:

  • Demonstrates comprehensive test coverage and a well-structured testing approach.
  • Utilizes clear and descriptive test case names which facilitate understanding of the tests’ purposes.

Areas for Improvement:

  1. Type Hints:

    • Incorporating type hints can enhance code maintainability and clarity:
    from typing import List

    from crewai.cli.constants import MODELS

    def test_watsonx_models_include_llama4_maverick() -> None:
        watsonx_models: List[str] = MODELS.get("watson", [])
        assert "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8" in watsonx_models
  2. Error Case Testing:

    • It would be prudent to add tests for invalid model selections to ensure robustness against misuse:
    import pytest
    from unittest.mock import patch

    # Assumed import path; select_choice is patched in the same module below.
    from crewai.cli.provider import select_model

    def test_select_model_watsonx_invalid_model() -> None:
        """Test handling of invalid model selection for watsonx provider."""
        provider = "watson"
        provider_models = {}
        
        with patch("crewai.cli.provider.select_choice") as mock_select_choice:
            mock_select_choice.return_value = "invalid_model"
            
            with pytest.raises(ValueError):
                select_model(provider, provider_models)
  3. Utilize Parametrized Tests:

    • Adding parametrized tests can streamline repetitive test scenarios and improve overall coverage:
    import pytest
    from unittest.mock import patch

    from crewai.cli.provider import select_model  # assumed import path

    @pytest.mark.parametrize("model_name", [
        "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8",
        "watsonx/mistral/mistral-large",
    ])
    def test_watsonx_model_selection(model_name: str) -> None:
        provider = "watson"
        provider_models = {}
        
        with patch("crewai.cli.provider.select_choice") as mock_select_choice:
            mock_select_choice.return_value = model_name
            result = select_model(provider, provider_models)
            assert result == model_name
  4. Integration Test:

    • Consider adding an integration test for the model initialization to validate that the model operates correctly in the larger system context:
    import pytest

    from crewai import LLM

    @pytest.mark.integration
    def test_watsonx_model_integration() -> None:
        # crewAI exposes a generic LLM wrapper rather than a provider-specific
        # WatsonxLLM class; the provider is encoded in the model id prefix.
        llm = LLM(model="watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8")
        assert llm.model == "watsonx/meta-llama/llama-4-maverick-17b-128e-instruct-fp8"

General Recommendations

  1. Improved Documentation:

    • Enhance the documentation for MODELS to clarify its structure and purpose, aiding future code maintainers.
  2. Validation:

    • Introduce validation checks so that only valid, predefined models can be selected, improving robustness (see the sketch after this list).
  3. Code Organization:

    • Consider moving model-specific constants to a dedicated models.py file as the application grows, helping maintain clarity.
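
A minimal sketch of such a validation guard (validate_model_choice is a hypothetical helper, not part of the current codebase):

    from crewai.cli.constants import MODELS

    def validate_model_choice(provider: str, model: str) -> str:
        # Hypothetical guard: fail fast on unknown models instead of letting an
        # invalid id propagate into crew creation.
        valid_models = MODELS.get(provider, [])
        if model not in valid_models:
            raise ValueError(f"{model!r} is not a predefined model for provider {provider!r}")
        return model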

Conclusion

The PR accomplishes its goals with sufficient test coverage and clarity in implementation. However, addressing the suggested improvements—particularly focusing on type safety, testing robustness, and documentation—will enhance the overall quality and maintainability of the code. After addressing these recommendations, the changes will be ready for merge.

devin-ai-integration[bot] and others added 2 commits on June 10, 2025 at 10:20:
- Removed unused pytest import
- Removed unused MagicMock import
- All tests continue to pass locally
- Addresses CI lint check failure

Co-Authored-By: João <joao@crewai.com>
- Added type hints to all test functions and variables for better maintainability
- Added parametrized test to verify multiple watsonx models can be selected
- Imported pytest for parametrized testing functionality
- All tests continue to pass locally (6 passed)
- Addresses review suggestions from joaomdmoura

Co-Authored-By: João <joao@crewai.com>
Resolves: #2984 [BUG] llama-4-maverick-17b-128e-instruct-fp8 through watsonx is not supported