Issue with Ollama and Pydantic  #205

@jonabeysens

Hi guys,

Are you also facing issues when using Ollama with Pydantic? The response from the LLM often does not match the output format requested via the Pydantic schema, so the subsequent code in our program throws an error.

In our tests, we used Codestral-22B and the code below to call LLM inference. We also tested the same flow against the Mistral API, and that works, so the problem seems to lie in the interface between Ollama and Pydantic.

Do you have good experience with Ollama and Pydantic? Do you know how we can solve this?

Many thanks for your help!

Pseudo code (not functional, just to show which functions we call):

from langchain_experimental.llms.ollama_functions import OllamaFunctions

# Config.HOST_LLM points at our Ollama server
llm = OllamaFunctions(base_url=Config.HOST_LLM, temperature=0, model="codestral:latest", format="json")

# bind the expected Pydantic schema to the model
llm = llm.with_structured_output(MyPydanticOutputClass)

llm.invoke({...})  # pass the data required by the LLM to perform inference

We are using Pydantic through the with_structured_output call:

llm.with_structured_output(MyPydanticOutputClass)

Sometimes some fields of the Pydantic object are not populated, so the LLM output is incomplete, for example:

Got: 3 validation errors for CodeGenerationOutput
dev_mode
field required (type=value_error.missing)
rtos
field required (type=value_error.missing)
is_func_empty

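The fields named in that error come from our output model. For reference, here is a trimmed-down sketch of that class (only the field names are taken from the error above; the types and descriptions are illustrative placeholders, not our real schema):

from pydantic import BaseModel, Field

class CodeGenerationOutput(BaseModel):
    # illustrative placeholder types/descriptions; only the field names match our real model
    dev_mode: bool = Field(description="whether the code should target a development build")
    rtos: str = Field(description="which RTOS the generated code should assume")
    is_func_empty: bool = Field(description="whether the generated function body is empty")
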
And sometimes a more general error is produced:

raise ValueError(
ValueError: Failed to parse a response from codestral-latest output: {}
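
To illustrate how the failure reaches our calling code, here is a minimal retry guard we could wrap around the call (invoke_with_retry, payload and max_attempts are just illustrative names; it assumes the failures surface as ValueError or pydantic.ValidationError, as in the tracebacks above):

from pydantic import ValidationError

def invoke_with_retry(structured_llm, payload, max_attempts=3):
    # retry the structured call a few times, since the failures are intermittent
    last_error = None
    for _ in range(max_attempts):
        try:
            return structured_llm.invoke(payload)
        except (ValueError, ValidationError) as exc:  # incomplete or unparsable output
            last_error = exc
    raise last_error

Of course, this only masks the problem; we would like to understand why Ollama's output does not match the schema in the first place.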
