Any docs on eval in usearch #1
Answered

ericfeunekes asked this question in Q&A
Your libraries look awesome! Just wondering if you have docs on how to use the evaluation modules in the usearch Python SDK. I can see the SDK APIs, but I couldn't find anything on how they are intended to be used.
Answered by ericfeunekes on Aug 9, 2024
I was actually thinking about extending the python/README.md with more examples, as well as tests for the usearch.eval module. Can you please open an issue for that? I believe it should be easy to draft a solution even with ChatGPT, and I can look into it a bit later. Thanks!
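For reference, a minimal sketch of the kind of example that could go into python/README.md, assuming usearch.eval exposes a self_recall helper as in recent versions of the package (treat the exact name and signature as assumptions until the docs land):

```python
import numpy as np

from usearch.index import Index
from usearch.eval import self_recall  # assumed helper; verify against your installed version

# Index a small batch of random vectors
ndim = 256
index = Index(ndim=ndim, metric="cos")
vectors = np.random.rand(1_000, ndim).astype(np.float32)
index.add(np.arange(len(vectors)), vectors)

# Check how reliably the index returns each stored vector
# as its own nearest neighbor, on a 50% sample of the index
stats = self_recall(index, sample=0.5)
print(stats)
```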
1 reply
Done! Even something very minimal showing how the classes are intended to be used together would be awesome.
This is very timely for me because I'm planning to do some testing around chunking strategies. Basically, I have labelled, ordered chunks that should be returned from a retriever. I'm going to test each strategy by retrieving the most similar chunk (first semantically, then by edit distance using StringZilla), using the edit distance as the error, and scoring with MSE to show which chunking strategy gets closest to what my labellers think are the right chunks.
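Roughly, a sketch of that loop could look like the code below. The `embed` callable and the chunk/label lists are placeholders for my own pipeline, and StringZilla's `edit_distance` is assumed from its Python bindings:

```python
import numpy as np

from usearch.index import Index
from stringzilla import edit_distance  # assumed Levenshtein helper in the bindings


def score_strategy(chunks: list[str], labels: list[str], embed) -> float:
    """Score one chunking strategy: lower MSE over edit distances is better.

    `embed` is a placeholder callable mapping a string to a 1-D float vector.
    """
    vectors = np.vstack([embed(chunk) for chunk in chunks]).astype(np.float32)
    index = Index(ndim=vectors.shape[1], metric="cos")
    index.add(np.arange(len(chunks)), vectors)

    errors = []
    for label in labels:
        # Semantic retrieval first: nearest chunk by cosine similarity
        matches = index.search(embed(label), 1)
        retrieved = chunks[int(matches.keys[0])]
        # Then edit distance between the retrieved and labelled chunks as the error
        errors.append(edit_distance(retrieved, label))

    # MSE over the per-label edit distances
    return float(np.mean(np.square(errors)))
```

Running score_strategy once per chunking strategy and comparing the scores should surface the strategy closest to the labels.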
Based on what I've seen, usearch looks perfect for this, particularly if you have evaluation built right in.