I just made this comment in another similar issue - it should solve this problem.
Has anyone here tried the newest multilingual cross-encoder model? It pairs a multilingual MiniLM (mMiniLMv2) with the multilingual mMARCO dataset. It doesn't appear to be in the SBERT documentation, but I just stumbled upon it while browsing HF. https://huggingface.co/cross-encoder/mmarco-mMiniLMv2-L12-H384-v1
There isn't any benchmark data for this specific model, but this paper used a fairly similar process and shows that these multilingual datasets/models are very competitive with their monolingual counterparts. https://arxiv.org/pdf/2108.13897.pdf
Hi,
I found very interesting information about the retrieve & re-rank pipeline (https://www.sbert.net/examples/applications/retrieve_rerank/README.html) as a possible way to build a Q&A system.
However, all MS-MARCO models (bi- and cross-encoders) seem to be English-only. Is there a multilingual version of MS-MARCO, and in particular a French one?
Thanks!