[Bug]: Using a pydantic model for structured output returns an error for Anthropic models #6766
Closed
Labels: bug (Something isn't working)
Comments
hi @dannylee1020 - can you share the request you're making with litellm? We recently moved to use Anthropic Tool use for JSON responses - this might be related

```python
import litellm
from pydantic import BaseModel

class TestModel(BaseModel):
    first_response: str

# PROMPT and QUERY are placeholders for the actual system prompt and user query
res = litellm.completion(
    model="claude-3-5-sonnet-20240620",
    messages=[
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": QUERY},
    ],
    response_format=TestModel,
)
```
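For context on what a provider needs from a snippet like the one above: Pydantic v2 can emit a standard JSON schema for the model, which is the shape a structured-output (tool-use) request carries. A minimal sketch, assuming pydantic v2 is installed; the `first_response` field comes from the repro above:

```python
from pydantic import BaseModel

class TestModel(BaseModel):
    first_response: str

# model_json_schema() returns the JSON schema dict for the model.
# This is the representation a provider integration must forward when
# enforcing structured output.
schema = TestModel.model_json_schema()
print(schema["type"])                # "object"
print(sorted(schema["properties"]))  # ["first_response"]
print(schema["required"])            # ["first_response"]
```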
I'm seeing this also. Downgrading to 1.52.5 fixed it.
able to repro this |
krrishdholakia added a commit that referenced this issue on Nov 19, 2024:
…n_schema fixes passing pydantic obj to anthropic Fixes #6766
ishaan-jaff pushed a commit that referenced this issue on Nov 22, 2024:
…n_schema fixes passing pydantic obj to anthropic Fixes #6766
ishaan-jaff pushed a commit that referenced this issue on Nov 22, 2024:

* fix(anthropic/chat/transformation.py): add json schema as values: json_schema fixes passing pydantic obj to anthropic Fixes #6766
* (feat): Add timestamp_granularities parameter to transcription API (#6457)
* Add timestamp_granularities parameter to transcription API
* add param to the local test
* fix(databricks/chat.py): handle max_retries optional param handling for openai-like calls Fixes issue with calling finetuned vertex ai models via databricks route
* build(ui/): add team admins via proxy ui
* fix: fix linting error
* test: fix test
* docs(vertex.md): refactor docs
* test: handle overloaded anthropic model error
* test: remove duplicate test
* test: fix test
* test: update test to handle model overloaded error

Co-authored-by: Show <[email protected]>
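The commit above passes the Pydantic-derived JSON schema through to Anthropic's tool-use path. A minimal sketch of that idea, with hypothetical names (`build_json_tool` and the `json_tool_call` tool name are illustrative, not litellm's actual code):

```python
def build_json_tool(json_schema: dict) -> dict:
    # Anthropic can enforce structured output by defining a tool whose
    # input_schema is the desired JSON schema; the model "calls" the tool,
    # and the tool-call arguments are the structured response.
    return {
        "name": "json_tool_call",
        "description": "Return the response as JSON matching the schema.",
        "input_schema": json_schema,
    }

schema = {
    "type": "object",
    "properties": {"first_response": {"type": "string"}},
    "required": ["first_response"],
}
tool = build_json_tool(schema)
print(tool["input_schema"]["required"])  # ["first_response"]
```

The bug was in this transformation step: the Pydantic object was not being converted to a plain JSON schema before being handed to the Anthropic request builder.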
What happened?

Description

litellm.completion returns an error when a pydantic model is passed as response_format for structured output with Anthropic models. This was tested with both Anthropic and OpenAI models; only OpenAI models work without an error. It also works on previous versions (< 1.52.8), so I suspect something changed in that release.

Relevant log output