Create new “How to choose an embedder” guide #3040
Hey @guimachiavelli 👋 I guess we're in a bit of a situation where "if you have to ask, just use OpenAI". A very rough outline could be:
Thanks for the reply, @dureuill, much appreciated! A small follow-up regarding your second point: does the user-provided embedder suggestion apply to documents with no meaningful textual fields, to non-textual queries, or both? I have realised it's not completely clear to me how we accommodate users with non-textual documents.
Meilisearch does not support non-textual fields natively, either in documents or in search requests (you can include an image in a document as base64, or reference it via its URL, but you cannot meaningfully search that document by that image). As soon as you use a user-provided embedder, you need to provide vectors both in your documents and in your semantic/hybrid search queries. From there, any combination of textual/non-textual is possible: since image embedding models appear to generally go image -> text and then text -> embedding, one can choose to embed either text or images, both at indexing and at search time. All embedding operations have to be done outside of Meilisearch, though.
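To make the flow above concrete, here is a rough sketch of the payloads involved when using a user-provided embedder: one for the embedder settings, one for a document carrying a pre-computed vector, and one for a hybrid search query. The embedder name (`image`), the 512-dimension size, and the placeholder vectors are illustrative assumptions, not values from this issue; the vectors themselves would be produced by an external model, since Meilisearch does not compute them.

```python
# Sketch of the three payloads used with a user-provided embedder.
# The embedder name "image" and dimensions=512 are assumptions for
# illustration; vectors must be computed outside Meilisearch.

# 1. Embedder settings: declare a userProvided source and its dimensions.
embedder_settings = {
    "image": {
        "source": "userProvided",
        "dimensions": 512,  # must match the external embedding model
    }
}

# 2. Each document carries its pre-computed vector in the `_vectors` field,
#    keyed by the embedder name.
document = {
    "id": 1,
    "title": "sunset photo",
    "_vectors": {"image": [0.1] * 512},  # placeholder vector
}

# 3. A hybrid search query supplies the query vector itself, plus a
#    semanticRatio balancing keyword and semantic results.
search_payload = {
    "q": "sunset",                  # keyword part of the hybrid search
    "vector": [0.1] * 512,          # semantic part, also user-provided
    "hybrid": {"embedder": "image", "semanticRatio": 0.5},
}
```

The key constraint this illustrates is that the same external model must produce both the document vectors and the query vectors, so their dimensions agree.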
Recent customer feedback indicates that users are struggling to move beyond the basic AI-powered search tutorial and implement hybrid search in their own projects.
Main points to address: