
Conversation


@Simon-Lind-glitch Simon-Lind-glitch commented Sep 15, 2025

Title

Added MCP execution for chat completion endpoint

Relevant issues

Fixes bug #14268


Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory. Adding at least 1 test is a hard requirement - see details
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible; it only solves 1 specific problem

Type

🐛 Bug Fix

Changes

I've added support for MCP tool execution to the chat completions pipeline.
While I was at it, I also fixed the tool format in requests to Bedrock, since that was the provider I was testing with. The two fixes are reasonably related.
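The PR's actual Bedrock diff isn't shown in this thread, but the usual shape of such a tool-format fix is translating OpenAI-style tool definitions into the Bedrock Converse toolSpec shape. This is a minimal sketch of that translation, not the PR's code; the tool name and schema below are hypothetical.

```python
def openai_tool_to_bedrock(tool: dict) -> dict:
    """Convert one OpenAI-style tool definition to Bedrock Converse toolSpec form.

    Sketch only: Bedrock's Converse API nests the JSON schema under
    inputSchema.json, while OpenAI puts it under function.parameters.
    """
    fn = tool["function"]
    return {
        "toolSpec": {
            "name": fn["name"],
            "description": fn.get("description", ""),
            # Fall back to an empty object schema if no parameters were given
            "inputSchema": {"json": fn.get("parameters", {"type": "object", "properties": {}})},
        }
    }

# Hypothetical tool definition for illustration
openai_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the weather",
        "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
    },
}
bedrock_tool = openai_tool_to_bedrock(openai_tool)
```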


vercel bot commented Sep 15, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: litellm | Deployment: Ready | Preview | Comment | Updated (UTC): Sep 16, 2025 6:35am

Contributor


why is this specific to bedrock? @Simon-Lind-glitch

Author


We've only gotten this error while using Bedrock and didn't find any existing bug report about it for other providers, so we thought we'd isolate the fix to Bedrock

Contributor


the issue you're closing refers to Ollama and all non-OpenAI providers

the fix would be incomplete if it only applies to Bedrock. Can we please have this be a more generic fix across providers?

Contributor


modifying the request body should happen via an async_pre_call_hook - this is cleaner than modifying the routing logic - https://docs.litellm.ai/docs/proxy/call_hooks

see -

async def async_pre_call_hook(
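To make the suggestion concrete, here is a minimal sketch of a pre-call hook that rewrites MCP tool entries in the request body, based on the call_hooks docs linked above. The LiteLLM types (CustomLogger, UserAPIKeyAuth, DualCache) are stubbed as plain objects so the sketch is self-contained; in a real proxy plugin you would subclass LiteLLM's CustomLogger, and the MCP normalization shown is a hypothetical placeholder, not the PR's logic.

```python
import asyncio
from typing import Optional

class MCPPreCallHook:
    """Sketch of a hook that adjusts the request body before routing.

    In LiteLLM this method lives on a CustomLogger subclass registered as a
    proxy callback; the signature here mirrors the docs but uses stub types.
    """

    async def async_pre_call_hook(
        self,
        user_api_key_dict: object,  # UserAPIKeyAuth in LiteLLM
        cache: object,              # DualCache in LiteLLM
        data: dict,                 # the incoming request body
        call_type: str,             # e.g. "completion"
    ) -> Optional[dict]:
        # Hypothetical normalization: give MCP tool entries a standard
        # "function" field so every provider sees the same tool shape.
        for tool in data.get("tools", []):
            if tool.get("type") == "mcp":
                tool.setdefault("function", {})
        # Returning the (possibly modified) body lets the call proceed.
        return data

async def _demo() -> dict:
    hook = MCPPreCallHook()
    body = {"model": "bedrock/claude", "tools": [{"type": "mcp"}]}
    return await hook.async_pre_call_hook(None, None, body, "completion")

result = asyncio.run(_demo())
```

Because the hook runs before routing for every provider, a fix placed here would apply generically rather than only to Bedrock, which addresses the reviewer's earlier concern.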

Author


I'll get on it

@Simon-Lind-glitch Simon-Lind-glitch marked this pull request as draft September 16, 2025 06:00
@Simon-Lind-glitch
Author

I need to tinker with this a bit more. I'll get back to you once it's ready
