Add reasoning attribute to Agent class by devin-ai-integration[bot] · Pull Request #2866 · crewAIInc/crewAI · GitHub

Add reasoning attribute to Agent class #2866


Merged
joaomdmoura merged 9 commits into main from devin/1747720641-agent-reasoning on May 20, 2025

Conversation

devin-ai-integration[bot]
Contributor
@devin-ai-integration[bot] commented May 20, 2025

Add reasoning attribute to Agent class

This PR adds a new reasoning boolean attribute to the Agent class, enabling agents to reflect and create a plan before executing tasks. The implementation follows the same pattern as the existing crew-level planning functionality.

Features

  • Added reasoning boolean attribute to Agent class (default: False)
  • Added max_reasoning_attempts optional attribute to control refinement iterations
  • Created AgentReasoning handler to manage the reasoning process
  • Updated execute_task method to use reasoning when enabled
  • Added tests to verify the functionality works correctly
  • Added documentation to help users understand and utilize the new feature
  • New: Implemented function calling for reasoning when model supports it, with fallback to text parsing
  • New: Moved prompts to translations system for better localization and management

Implementation Details

The reasoning functionality allows agents to:

  1. Reflect on a task and create a detailed plan
  2. Evaluate whether they're ready to execute the task
  3. Refine the plan as necessary until ready or max_reasoning_attempts is reached
  4. Inject the reasoning plan into the task description before execution

The reasoning functionality uses the agent's own LLM to create and refine the plan, ensuring consistency in the agent's thinking process.
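For illustration, here is a minimal sketch of that loop. The helper name, prompt text, and message format are assumptions for readability, not the actual AgentReasoning implementation in src/crewai/utilities/reasoning_handler.py:

def reason_about_task(agent, task, max_attempts=None):
    """Draft a plan with the agent's own LLM, refining until it reports READY."""
    prompt = f"Create a step-by-step plan for this task:\n{task.description}"
    plan = agent.llm.call([{"role": "user", "content": prompt}])
    attempts = 1
    # Refine until the model declares readiness or the attempt cap is hit.
    while "READY:" not in plan:
        if max_attempts is not None and attempts >= max_attempts:
            break
        refine_prompt = f"Refine this plan and state READY when ready:\n{plan}"
        plan = agent.llm.call([{"role": "user", "content": refine_prompt}])
        attempts += 1
    # Inject the final plan into the task description before execution.
    task.description += f"\n\nReasoning Plan:\n{plan}"
    return plan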

Function Calling Implementation

When the agent's LLM supports function calling (determined by llm.supports_function_calling()), the reasoning process will use a structured function schema to get a more reliable response. This implementation:

  • Uses a consistent function schema for both initial plan creation and refinement
  • Includes robust error handling with fallback to text parsing if function calling fails
  • Ensures compatibility with different LLM providers through simplified parameter usage
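For reference, the schema could look roughly like the sketch below. The field names (plan, ready) mirror the READY/NOT READY convention shown later in this description, but the exact schema shipped in the PR may differ:

REASONING_FUNCTION = {
    "type": "function",
    "function": {
        "name": "reasoning_plan",
        "description": "Create or refine a task plan and report readiness.",
        "parameters": {
            "type": "object",
            "properties": {
                "plan": {"type": "string", "description": "Step-by-step plan."},
                "ready": {"type": "boolean", "description": "True when ready to execute."},
            },
            "required": ["plan", "ready"],
        },
    },
}

When llm.supports_function_calling() returns False, or the structured call fails, the handler falls back to parsing the READY/NOT READY markers out of the plain-text response.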

Translations Integration

All prompts used in the reasoning process have been moved to the translations system in src/crewai/translations/en.json, enabling:

  • Easier prompt management and updates
  • Support for localization to different languages
  • Consistent prompt handling across the codebase
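As a hedged example, retrieving one of these prompts could look like the following. The key name is hypothetical; the actual keys are defined in src/crewai/translations/en.json:

from crewai.utilities import I18N

i18n = I18N()
# "reasoning_plan" is an illustrative key; check en.json for the real names.
prompt_template = i18n.slice("reasoning_plan")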

Example Usage

from crewai import Agent, Task, Crew

# Create an agent with reasoning enabled
analyst = Agent(
    role="Data Analyst",
    goal="Analyze data and provide insights",
    backstory="You are an expert data analyst.",
    reasoning=True,
    max_reasoning_attempts=3  # Optional: Set a limit on reasoning attempts
)

# Create a task
analysis_task = Task(
    description="Analyze the provided sales data and identify key trends.",
    expected_output="A report highlighting the top 3 sales trends.",
    agent=analyst
)

# Create a crew and run the task
crew = Crew(agents=[analyst], tasks=[analysis_task])
result = crew.kickoff()

Error Handling Example

# Create an agent with reasoning enabled
analyst = Agent(
    role="Data Analyst",
    goal="Analyze data and provide insights",
    backstory="You are an expert data analyst.",
    reasoning=True
)

try:
    # Create and execute a task
    result = analyst.execute_task(Task(
        description="Analyze this dataset",
        expected_output="Key insights"
    ))
except Exception as e:
    print(f"Error during agent reasoning: {e}")
    # Fall back to running the agent without reasoning
    analyst.reasoning = False
    result = analyst.execute_task(Task(
        description="Analyze this dataset",
        expected_output="Key insights"
    ))

Example Reasoning Output

Reasoning Plan:
I'll solve this data analysis task by following these steps:

1. Understanding the Task:
   - I need to analyze sales data to identify the top 3 trends
   - The output should be a concise report highlighting these trends

2. Key Steps:
   - Load and clean the sales data
   - Perform exploratory data analysis
   - Identify patterns and anomalies
   - Calculate key metrics (growth rates, seasonality, etc.)
   - Prioritize the top 3 most significant trends

3. Approach to Challenges:
   - If data is incomplete, I'll use interpolation techniques
   - For outliers, I'll apply robust statistical methods
   - If trends are unclear, I'll use multiple analysis techniques

4. Tool Usage:
   - I'll use pandas for data manipulation
   - matplotlib/seaborn for visualization
   - statsmodels for time series analysis

5. Expected Outcome:
   - A clear report with the top 3 sales trends
   - Supporting evidence for each trend
   - Actionable insights based on the findings

READY: I am ready to execute the task.

Testing

Added tests to verify:

  • Basic reasoning functionality works
  • Plan refinement works when the agent is not initially ready
  • Max attempts limit is respected
  • Function calling works correctly with proper fallbacks
  • Error handling works as expected
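As an illustration of the max-attempts check, a test along these lines could work. The handler method name follows the suggestion in the review below and may differ from the merged code; the real tests live in tests/agent_reasoning_test.py:

from unittest.mock import patch

from crewai import Agent, Task
from crewai.utilities.reasoning_handler import AgentReasoning


def test_max_reasoning_attempts_is_respected():
    agent = Agent(
        role="Data Analyst",
        goal="Analyze data and provide insights",
        backstory="You are an expert data analyst.",
        reasoning=True,
        max_reasoning_attempts=2,
    )
    task = Task(
        description="Analyze the sales data.",
        expected_output="Key insights",
        agent=agent,
    )
    handler = AgentReasoning(task=task, agent=agent)
    # An LLM that never reports READY should be cut off by the attempt cap.
    with patch.object(agent.llm, "call", return_value="NOT READY: still refining.") as mock_call:
        handler._handle_agent_reasoning()
    assert mock_call.call_count <= 3  # initial plan plus at most two refinements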

Link to Devin run

https://app.devin.ai/sessions/06fe7cce3d6544cf822705ba656a6d7b

Requested by

Joe Moura (joao@crewai.com)

Co-Authored-By: Joe Moura <joao@crewai.com>
Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

@joaomdmoura
Collaborator

Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment: Add Reasoning Attribute to Agent Class

Overview

This PR introduces valuable reasoning capabilities for agents, enabling them to plan and reflect before executing tasks. The changes span four files and include documentation, core functionality, and tests.

Code Quality Findings

Documentation (docs/concepts/reasoning.mdx)

  • Positive Aspects: The documentation is well-structured, providing clear examples and configuration options.
  • Improvements:
    • Include examples of error handling to guide users in managing edge cases effectively.
    • Add example outputs of the reasoning process to enhance user understanding.

Agent Class Changes (src/crewai/agent.py)

  • Positive Aspects: The integration of the reasoning attributes is clean and logically organized.

  • Issues Identified:

    1. Type Hints: The return type for reasoning_output is missing. Adding type hints could improve IDE support and static type checking.
    2. Error Handling: Currently, there is no error handling during the initialization of the reasoning handler, which may lead to runtime exceptions.
  • Suggested Code Improvements:

def execute_task(self, task: Task) -> str:
    try:
        if self.reasoning:
            from crewai.utilities.reasoning_handler import AgentReasoning, AgentReasoningOutput

            reasoning_handler = AgentReasoning(task=task, agent=self)
            reasoning_output: AgentReasoningOutput = reasoning_handler._handle_agent_reasoning()

            task.description += f"\n\nReasoning Plan:\n{reasoning_output.plan.plan}"
    except Exception as e:
        self._logger.log("error", f"Error during reasoning process: {e}")
        # Consider whether to proceed without reasoning or raise the error
    # ... continue with the normal task execution path

Reasoning Handler (src/crewai/utilities/reasoning_handler.py)

  • Positive Aspects: The class structure exhibits good separation of concerns and clarity in design.

  • Issues Identified:

    1. The method _handle_agent_reasoning is lengthy and should be divided into smaller methods for better readability and maintainability.
    2. Input validation is currently lacking, which could potentially lead to issues.
    3. Consider making protected methods truly private to enhance encapsulation.
    4. Externalize hardcoded prompts for easier modification and localization.
  • Suggested Code Improvements:

class AgentReasoning:
    def __init__(self, task: Task, agent: Agent):
        if not task or not agent:
            raise ValueError("Both task and agent must be provided.")
        self.task = task
        self.agent = agent

    def handle_reasoning(self) -> AgentReasoningOutput:
        """Public method for the reasoning process."""
        return self.__handle_agent_reasoning()
        
    def __handle_agent_reasoning(self) -> AgentReasoningOutput:
        initial_plan = self.__create_initial_plan()
        return self.__refine_plan_if_needed(initial_plan)

Tests (tests/agent_reasoning_test.py)

  • Positive Aspects: The test coverage is good across main functionalities.

  • Issues Identified:

    1. There are no tests for edge cases, which are important to validate resilience.
    2. Error handling scenarios are not tested, which is crucial for robust code.
    3. Consider using fixtures for mock responses to improve test clarity.
  • Suggested Code Improvements:

import pytest

from crewai import Agent, Task


@pytest.fixture
def mock_llm_responses():
    return {
        "ready": "READY: I am ready to execute the task.",
        "not_ready": "NOT READY: I need to refine my plan.",
        "execution": "Task execution result",
    }


def test_agent_reasoning_error_handling():
    """Test error handling during the reasoning process."""
    agent = Agent(
        role="Data Analyst",
        goal="Analyze data",
        backstory="An expert analyst.",
        reasoning=True,
    )
    with pytest.raises(ValueError):
        task = Task(description="")  # Invalid: empty description, no expected_output
        agent.execute_task(task)

General Recommendations

  1. Logging: Introduce structured logging to capture reasoning states and events.
  2. Configuration Management: Externalize configuration for reasoning prompts to facilitate easier updates.
  3. Performance Optimization: Implement caching for reasoning plans and explore asynchronous support for LLM calls.
  4. Security Considerations:
    • Validate all inputs to mitigate prompt injection risks.
    • Apply rate limiting for reasoning to prevent abuse and excessive loads.
    • Sanitize output before integration into task descriptions.
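For recommendation 3, a simple cache keyed by a hash of the task description would let repeated tasks reuse a plan instead of paying for new LLM round-trips. The names below are hypothetical, not part of the PR:

import hashlib

_plan_cache: dict[str, str] = {}

def cached_reasoning_plan(agent, task, create_plan):
    # Key on the task description so identical tasks hit the cache.
    key = hashlib.sha256(task.description.encode("utf-8")).hexdigest()
    if key not in _plan_cache:
        _plan_cache[key] = create_plan(agent, task)  # expensive LLM calls
    return _plan_cache[key]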

Impact Analysis

  • Code Quality: The implementation exhibits a strong foundational structure.
  • Testing: More comprehensive test coverage is needed to ensure reliability.
  • Documentation: The documentation is generally clear and helpful.
  • Error Handling: Improvement is required for robust error management.
  • Maintainability: The overall design suggests good maintainability due to clean coding practices.

Final Verdict

This PR effectively introduces a significant feature into the agent architecture, enhancing its capabilities. The recommendation for merging stands, contingent upon addressing the highlighted error handling and testing improvements. The architecture holds promise but requires careful refinement in critical areas to ensure stability and performance.

devin-ai-integration bot and others added 5 commits May 20, 2025 06:17
devin-ai-integration bot and others added 3 commits May 20, 2025 09:05
@joaomdmoura joaomdmoura merged commit 1ef2213 into main May 20, 2025
9 checks passed
@joaomdmoura joaomdmoura deleted the devin/1747720641-agent-reasoning branch May 20, 2025 14:40
didier-durand pushed a commit to didier-durand/crewAI that referenced this pull request Jun 12, 2025
* Add reasoning attribute to Agent class

Co-Authored-By: Joe Moura <joao@crewai.com>

* Address PR feedback: improve type hints, error handling, refactor reasoning handler, and enhance tests and docs

Co-Authored-By: Joe Moura <joao@crewai.com>

* Implement function calling for reasoning and move prompts to translations

Co-Authored-By: Joe Moura <joao@crewai.com>

* Simplify function calling implementation with better error handling

Co-Authored-By: Joe Moura <joao@crewai.com>

* Enhance system prompts to leverage agent context (role, goal, backstory)

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix lint and type-checker issues

Co-Authored-By: Joe Moura <joao@crewai.com>

* Enhance system prompts to better leverage agent context

Co-Authored-By: Joe Moura <joao@crewai.com>

* Fix backstory access in reasoning handler for Python 3.12 compatibility

Co-Authored-By: Joe Moura <joao@crewai.com>

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>