Home
Nandan Thakur edited this page Jun 29, 2022
Welcome to the official wiki of the BEIR benchmark. BEIR is a heterogeneous benchmark containing diverse IR tasks, and it provides a common, easy-to-use framework for evaluating your NLP-based retrieval models across those tasks.
This guide shows how to use the BEIR benchmark effectively for your use cases.
For more information, check out our publications:
- BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models (NeurIPS 2021, Datasets and Benchmarks Track)