Replies: 1 comment
- Could you file this as an issue with more details to reproduce, @octadion?
Hello everyone,
I'm using LiteLLM as a proxy in GraphRAG, with Ollama as the model behind LiteLLM. My goal is to route GraphRAG's indexing process through LiteLLM, so that Ollama is accessed via LiteLLM.
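For context, here is a minimal sketch of the routing I mean. Since the LiteLLM proxy exposes an OpenAI-compatible API, GraphRAG is pointed at it the same way this client is; the proxy URL (http://localhost:4000) and the model name (ollama/llama3) are placeholders for my actual setup, not values from the logs.

```python
# Minimal sketch of the routing described above (URL and model name are
# placeholders). The LiteLLM proxy speaks the OpenAI API, so GraphRAG is
# configured with this base_url instead of Ollama's.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # LiteLLM proxy, not Ollama directly
    api_key="sk-anything",             # LiteLLM accepts any key unless auth is configured
)

# A query-style call (chat completion); this path works fine for me.
resp = client.chat.completions.create(
    model="ollama/llama3",
    messages=[{"role": "user", "content": "Hello through LiteLLM -> Ollama"}],
)
print(resp.choices[0].message.content)
```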
However, I'm encountering an issue where the output during the indexing process fails when routed through LiteLLM. An example from the Langfuse logs shows the following:
This causes the indexing process to fail.
On the other hand, if I use Ollama directly in GraphRAG without LiteLLM, the indexing works perfectly fine. However, I need LiteLLM in order to trace token usage via callbacks.
What's puzzling is that query operations (questions) through LiteLLM -> Ollama work fine without any issues. A quick way to compare the two call types is sketched below.
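One thing I'd like to rule out: GraphRAG's indexing pipeline also calls the embeddings endpoint, while queries mostly exercise chat completions. This is a hypothetical narrowing-down script, not my actual config; the embedding model name and proxy URL are assumptions:

```python
# Hypothetical check: if embeddings fail through the LiteLLM proxy while
# chat completions succeed, that would explain indexing failing while
# queries work. Model name and proxy URL are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-anything")

try:
    emb = client.embeddings.create(
        model="ollama/nomic-embed-text",  # assumed embedding model
        input=["GraphRAG indexing test chunk"],
    )
    print("embeddings OK, dimension:", len(emb.data[0].embedding))
except Exception as exc:
    print("embeddings failed through LiteLLM:", exc)
```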
Has anyone experienced a similar issue, or does anyone have insight into why indexing through LiteLLM fails while queries work as expected?
Any help would be greatly appreciated!
Thank you!