Hi guys,
Are you also facing issues when using Ollama with Pydantic? It seems the LLM's response is often not in the output format requested via Pydantic, so the subsequent code in our program throws an error.
In our tests we used Codestral-22B and the calls below for LLM inference. The same setup works when we use Mistral's API, so the problem seems to be in the interface between Ollama and Pydantic.
Do you have good experience with Ollama and Pydantic? Do you know how we can solve this?
Many thanks for your help!
Pseudo code (not functional, just to show which functions we call):

from langchain_experimental.llms.ollama_functions import OllamaFunctions

# connect to the Ollama server and ask for JSON-formatted output
llm = OllamaFunctions(base_url=Config.HOST_LLM, temperature=0, model="codestral:latest", format="json")
# bind the Pydantic class that describes the expected response structure
llm = llm.with_structured_output(MyPydanticOutputClass)
llm.invoke({...})  # pass the data required by the LLM to perform inference
We are using Pydantic through the call with_structured_output:
llm.with_structured_output(MyPydanticOutputClass)
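For reference, MyPydanticOutputClass corresponds to the CodeGenerationOutput model named in the errors below; a rough sketch (field names taken from those validation errors, the annotated types are only assumptions) looks like this:

# Rough sketch of the output model; the field names appear in the
# validation errors below, the types shown here are assumptions.
from pydantic import BaseModel

class CodeGenerationOutput(BaseModel):
    dev_mode: bool       # assumed type
    rtos: str            # assumed type
    is_func_empty: bool  # assumed type
    # ... further fields omitted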
Sometimes some fields of the Pydantic object are not parsed, so the output of the LLM is incomplete, for example:

Got: 3 validation errors for CodeGenerationOutput
dev_mode
  field required (type=value_error.missing)
rtos
  field required (type=value_error.missing)
is_func_empty
  field required (type=value_error.missing)
And sometimes we get a more general error:
raise ValueError(
ValueError: Failed to parse a response from codestral-latest output: {}
Do you know how LangChain instructs the LLM with the Pydantic model you provide for the response format, and how it processes the response?
I do have some experience with Pydantic and Ollama in my own library (ollama-instructor), where I instruct the LLM to adhere to the JSON schema of the Pydantic model. And yes, sometimes the models are not able to provide the properties of the Pydantic model correctly (e.g. Mistral often has problems responding with a list/array of dicts/objects). For this reason I added retries for malformed responses and the possibility to receive partial models. But I do not know how LangChain handles the response.
You could try instructing the LLM with the JSON schema directly via the ollama client instead of LangChain and see what comes back from Codestral; a sketch of that approach follows below. And you could try my library to see whether it works for your use case.
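To make that concrete, here is a minimal sketch (not the ollama-instructor implementation) that puts the JSON schema of the Pydantic model into the system prompt, calls the plain ollama client with format="json", validates the reply with Pydantic, and retries on validation errors. Config.HOST_LLM and CodeGenerationOutput are taken from the snippets above; the prompt wording, retry count, and Pydantic v2 methods (model_json_schema / model_validate_json; on Pydantic v1 you would use .schema() / .parse_raw()) are assumptions about your setup:

import json
import ollama
from pydantic import ValidationError

client = ollama.Client(host=Config.HOST_LLM)

# put the JSON schema of the Pydantic model into the system prompt
schema = json.dumps(CodeGenerationOutput.model_json_schema())  # Pydantic v2; .schema() on v1
system_prompt = (
    "Answer only with a JSON object that validates against this JSON schema, "
    "with no extra text:\n" + schema
)

def generate(user_prompt: str, retries: int = 3) -> CodeGenerationOutput:
    last_error = None
    for _ in range(retries):
        response = client.chat(
            model="codestral:latest",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt},
            ],
            format="json",              # ask Ollama to constrain the output to valid JSON
            options={"temperature": 0},
        )
        raw = response["message"]["content"]
        try:
            # validate against the Pydantic model (.parse_raw() on Pydantic v1)
            return CodeGenerationOutput.model_validate_json(raw)
        except ValidationError as err:
            last_error = err            # e.g. missing dev_mode / rtos / is_func_empty
    raise ValueError(f"Codestral output never matched the schema: {last_error}")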
To be honest, without knowing the exact response from the LLM it is hard to say more. I hope my comment helps anyway. 😊