LlamaIndex integration (Python) #828
Replies: 7 comments 10 replies
-
I'm really looking forward to it.
-
That's an awesome idea. Do you plan to add it to the `callbacks` folder?
-
We got started on the LlamaIndex integration (Python for now) and the first version looks promising. We'll test it with some users/customers over the next few days; email me if you are interested in trying it and giving feedback: marc at langfuse.com
-
We've just released the first alpha version of this integration. It is mostly stable, but not yet covered by semver: some things might still change before we add it to the documentation and the llamaindex package, and interfaces might still shift slightly. Here are previews of the docs in case you are interested in trying it:
Any feedback? Please let me know here or email me: marc at langfuse com
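For anyone who wants to try the alpha, here is a minimal setup sketch of how the callback-based integration is wired up; it assumes the handler is exposed as `LlamaIndexCallbackHandler` in the `langfuse.llama_index` module and that credentials come from the standard Langfuse environment variables, so treat the exact names as provisional while the interfaces are still in flux.

```python
import os

from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager
from langfuse.llama_index import LlamaIndexCallbackHandler  # assumed module path for the alpha

# Langfuse credentials are read from the environment by default
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"

# register the Langfuse handler globally so LlamaIndex operations
# (embedding, retrieval, LLM calls) are traced
langfuse_callback_handler = LlamaIndexCallbackHandler()
Settings.callback_manager = CallbackManager([langfuse_callback_handler])
```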
-
Hey @marcklingen, first of all, that's great news! I just tested it a bit; it took me a moment to account for the huge refactoring from llamaindex 0.9 to 0.10. Your demo script works for me, but within my RAG application it did not work initially. After some more testing, I realized I need to set the callback handler individually for each created LlamaIndex instance; with that, I can see the retrieval results:

```python
# llama_index >= 0.10 import paths
from llama_index.core import Settings, VectorStoreIndex
from llama_index.core.chat_engine import ContextChatEngine
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

Settings.embed_model = OpenAIEmbedding(**embedding_args, callback_manager=Settings.callback_manager)
Settings.llm = OpenAI(..., callback_manager=Settings.callback_manager)
index = VectorStoreIndex.from_vector_store(..., callback_manager=Settings.callback_manager)
ContextChatEngine.from_defaults(
    retriever=index.as_retriever(),
    callback_manager=Settings.callback_manager,
)
```
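As a follow-up, here is a hedged usage sketch of how a query against an engine wired up like the snippet above could be traced end to end; the question string is made up, and the explicit `flush()` call (to push buffered events before a short-lived script exits) follows the pattern of Langfuse's other SDKs, so check the docs for the exact method on the handler.

```python
# illustrative only: assumes the Langfuse handler was registered on
# Settings.callback_manager before the engine and index above were created
chat_engine = ContextChatEngine.from_defaults(
    retriever=index.as_retriever(),
    callback_manager=Settings.callback_manager,
)

response = chat_engine.chat("What does the retrieved context say about pricing?")
print(response)

# flush buffered events so the trace reaches Langfuse before the process exits
langfuse_callback_handler.flush()
```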
-
Thanks everyone for testing this and providing feedback! We've iterated quite a bit on the implementation to support all Langfuse platform features conveniently. Have a look here: https://langfuse.com/changelog/2024-02-27-llama-index-integration
If you have any further feedback, please add it here, and please create issues if you experience any bugs when using the integration so we can fix them asap.
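As one example of the platform features mentioned above, the sketch below attaches user and session attribution to the traces produced by the integration. It assumes the handler exposes a `set_trace_params` helper as described in the Langfuse docs; if your SDK version differs, follow the changelog link above for the authoritative API.

```python
# assumed helper for trace attribution; verify against the current Langfuse docs
langfuse_callback_handler.set_trace_params(
    user_id="user-123",        # attribute traces to an end user
    session_id="session-abc",  # group multi-turn conversations
    tags=["rag", "production"],
)
```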
-
Created a new idea here to track interest in a LlamaIndex JS/TS integration: https://github.com/orgs/langfuse/discussions/1291
-
Many users approached @maxdeichmann, @clemra and me regarding a LlamaIndex integration. Keeping this request here to track the overall interest and discuss a potential implementation.
From #819
We are considering implementing a dedicated observation type for context embedding and retrieval to offer better RAG analytics. The LlamaIndex integration would be a good occasion to implement and test these new types as well. However, the integration is not blocked by this, as we can start with spans and move to the new types at a later time.
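To illustrate the "start with spans" fallback, here is a hedged sketch of manually recording a retrieval step as a span with the Langfuse Python SDK (v2-style `trace`/`span`/`generation` API); the trace name, inputs, and placeholder chunks are made up for illustration.

```python
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY from the env

trace = langfuse.trace(name="rag-query", input={"question": "What is Langfuse?"})

# until a dedicated retrieval observation type exists, model retrieval as a span
span = trace.span(name="context-retrieval", input={"query": "What is Langfuse?"})
retrieved_chunks = ["<doc chunk 1>", "<doc chunk 2>"]  # placeholder for real retriever output
span.end(output={"chunks": retrieved_chunks})

# the LLM call itself would be recorded as a generation observation
generation = trace.generation(name="answer", model="gpt-3.5-turbo", input=retrieved_chunks)
generation.end(output="<model answer>")
```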