Fix issue #3165: Enhanced error handling for custom OpenAI-compatible endpoints #3166


Closed
wants to merge 3 commits

Conversation

@devin-ai-integration devin-ai-integration bot commented Jul 15, 2025


Summary

This PR addresses issue #3165, where CrewAI surfaced a generic "❌ LLM Failed" message instead of propagating specific error details from custom OpenAI-compatible endpoints. The fix enhances the error-handling path to capture and display detailed error information, including the error type, the original error message, and endpoint context.

Key Changes:

  • Enhanced LLMCallFailedEvent with new optional fields: error_type, original_error, endpoint_info
  • Updated LLM.call() exception handling to capture detailed error information for custom endpoints
  • Modified console formatter to display specific error details (error type, endpoint URL, etc.)
  • Fixed event listener to properly pass event objects to the formatter
  • Added comprehensive test suite covering connection errors, authentication errors, and both streaming/non-streaming scenarios
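The enhanced event shape can be sketched roughly as follows. This is a minimal illustration based only on the field names listed above; the real LLMCallFailedEvent in crewai extends the library's event base class, and its exact types and base may differ:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMCallFailedEvent:
    # Pre-existing field, kept so current consumers keep working
    error: str
    # New optional fields from this PR; None defaults preserve
    # backward compatibility for code that constructs the event today
    error_type: Optional[str] = None
    original_error: Optional[str] = None
    endpoint_info: Optional[dict] = None

# Old-style construction still works unchanged
legacy = LLMCallFailedEvent(error="LLM Failed")

# Enhanced construction carries the extra diagnostic context
enhanced = LLMCallFailedEvent(
    error="LLM Failed",
    error_type="ConnectionError",
    original_error="Failed to establish connection",
    endpoint_info={"base_url": "https://invalid-endpoint.com/v1"},
)
```

Because every new field defaults to None, existing listeners that only read the error field are unaffected.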

Before: Generic "❌ LLM Failed" message with no actionable details
After: Specific error information like "ConnectionError: Failed to establish connection" with endpoint details

Review & Testing Checklist for Human

🔴 High Priority (3 items)

  • Test with real custom endpoints - Verify the fix works with actual non-existent URLs and invalid API keys, not just mocked scenarios
  • Verify existing OpenAI endpoints still work - Ensure no regressions in standard OpenAI API error handling
  • Check enhanced error details display - Confirm that the new error panels with error type and endpoint info actually appear in console output

🟡 Medium Priority (2 items)

  • Test streaming scenarios - Verify both streaming and non-streaming custom endpoints handle errors correctly
  • Backward compatibility check - Ensure existing code that relies on the error field still functions normally

Recommended End-to-End Test Plan:

  1. Run python reproduce_issue_3165.py to verify the fix
  2. Test with a real invalid custom endpoint (e.g., base_url="https://invalid-endpoint.com/v1")
  3. Test with real invalid API key for a valid custom endpoint
  4. Verify normal OpenAI API calls still work and show appropriate errors
  5. Test both Agent-level and direct LLM calls with custom endpoints
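The capture behavior these steps exercise can be approximated in plain Python. The helper below is hypothetical (not the actual LLM.call() implementation) and only illustrates the described logic: on failure, record the error type, the original message, and the endpoint instead of collapsing everything into a generic failure:

```python
def capture_endpoint_failure(call_fn, base_url):
    """Hypothetical helper mirroring the capture logic described in this PR:
    preserve the error type, original message, and endpoint context."""
    try:
        return call_fn()
    except Exception as exc:
        details = {
            "error_type": type(exc).__name__,
            "original_error": str(exc),
            "endpoint_info": {"base_url": base_url},
        }
        # The real code emits an LLMCallFailedEvent carrying these fields;
        # here we raise a summarized error purely for illustration.
        raise RuntimeError(
            f"{details['error_type']}: {details['original_error']} "
            f"(endpoint: {base_url})"
        ) from exc

def failing_call():
    # Stand-in for a request to an unreachable custom endpoint
    raise ConnectionError("Failed to establish connection")

try:
    capture_endpoint_failure(failing_call, "https://invalid-endpoint.com/v1")
except RuntimeError as err:
    message = str(err)
```

The resulting message matches the "After" behavior described in the summary: the error type, the original message, and the endpoint all survive.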

Diagram

```mermaid
%%{init: {"theme": "default"}}%%
graph TD
    A["src/crewai/llm.py<br/>LLM.call()"]:::major-edit
    B["src/crewai/utilities/events/<br/>llm_events.py<br/>LLMCallFailedEvent"]:::major-edit
    C["src/crewai/utilities/events/<br/>utils/console_formatter.py<br/>handle_llm_call_failed()"]:::major-edit
    D["src/crewai/utilities/events/<br/>event_listener.py<br/>on_llm_call_failed()"]:::minor-edit
    E["tests/test_custom_endpoint_<br/>error_handling.py"]:::context
    F["reproduce_issue_3165.py"]:::context

    A -->|"emits enhanced<br/>error event"| B
    B -->|"event passed to<br/>listener"| D
    D -->|"calls formatter<br/>with event object"| C
    C -->|"displays enhanced<br/>error details"| G["Console Output"]:::context
    E -->|"tests"| A
    E -->|"tests"| B
    E -->|"tests"| C
    F -->|"reproduces<br/>issue"| A

    subgraph Legend
        L1[Major Edit]:::major-edit
        L2[Minor Edit]:::minor-edit
        L3[Context/No Edit]:::context
    end

    classDef major-edit fill:#90EE90
    classDef minor-edit fill:#87CEEB
    classDef context fill:#FFFFFF
```

Notes

  • Session Details: This work was completed in Devin session https://app.devin.ai/sessions/b71e870ca7b844f4bb8872878aaf9d96, requested by João ([email protected])
  • CI Status: All 10 CI checks are passing (Python 3.10-3.13, lint, security, type checking)
  • Backward Compatibility: The new fields in LLMCallFailedEvent are optional, maintaining compatibility with existing code
  • Testing Note: While comprehensive unit tests were added, real-world testing with actual custom endpoints is crucial for final validation
  • Risk Assessment: This touches core error handling logic, so thorough testing of both success and failure scenarios is recommended before merging

… endpoints

- Enhanced LLMCallFailedEvent with error_type, original_error, and endpoint_info fields
- Updated LLM.call() to capture detailed error information for custom endpoints
- Enhanced console formatter to display specific error details instead of generic 'LLM Failed'
- Added comprehensive tests covering connection errors, authentication errors, and streaming responses
- Maintains backward compatibility with existing error handling
- Includes reproduction script to verify the fix

Co-Authored-By: João <[email protected]>
🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

devin-ai-integration bot and others added 2 commits July 15, 2025 10:05
- Updated on_llm_call_failed to pass the event object to handle_llm_call_failed
- This enables the console formatter to display enhanced error details
- Completes the integration of enhanced error handling for custom endpoints

Co-Authored-By: João <[email protected]>
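The listener change can be illustrated with a small sketch. Class and function names follow the diagram above but are simplified stand-ins; the real console formatter renders rich panels rather than plain strings:

```python
from types import SimpleNamespace

class ConsoleFormatterSketch:
    """Simplified stand-in for the console formatter."""

    def handle_llm_call_failed(self, event):
        error_type = getattr(event, "error_type", None)
        endpoint_info = getattr(event, "endpoint_info", None) or {}
        if error_type:
            # Enhanced path: show the specific failure and its endpoint
            return (f"❌ LLM Failed: {error_type}: {event.error}"
                    f" (endpoint: {endpoint_info.get('base_url', 'unknown')})")
        # Legacy path: the old generic message
        return "❌ LLM Failed"

def on_llm_call_failed(formatter, event):
    # Before the fix, only event.error (a bare string) reached the
    # formatter; the fix forwards the whole event object instead, so
    # the formatter can read the new optional fields.
    return formatter.handle_llm_call_failed(event)

event = SimpleNamespace(
    error="LLM call failed",
    error_type="ConnectionError",
    endpoint_info={"base_url": "https://invalid-endpoint.com/v1"},
)
output = on_llm_call_failed(ConsoleFormatterSketch(), event)
```

Events lacking the new fields fall through to the legacy message, which is what keeps older emitters working.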
Closing due to inactivity for more than 7 days.
