
🍻 The BEIR Benchmark

Welcome to the official Wiki of the BEIR benchmark. BEIR is a heterogeneous benchmark containing diverse IR tasks. It also provides a common, easy-to-use framework for evaluating your NLP-based retrieval models on the benchmark.
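For example, a dense retriever built on Sentence-Transformers can be evaluated on a BEIR dataset in a few lines. The sketch below assumes the `beir` Python package is installed (`pip install beir`); the SciFact dataset and the `msmarco-distilbert-base-tas-b` checkpoint are illustrative choices, and the download URL may differ from the one hosted for the current release.

```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval import models
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES

# Download and unzip a BEIR dataset (SciFact used here as an example).
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")

# Load the corpus, queries, and relevance judgements for the test split.
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

# Wrap a Sentence-Transformers model for exact (brute-force) dense retrieval.
model = DRES(models.SentenceBERT("msmarco-distilbert-base-tas-b"), batch_size=16)
retriever = EvaluateRetrieval(model, score_function="dot")

# Retrieve documents and compute standard IR metrics (nDCG, MAP, Recall, P@k).
results = retriever.retrieve(corpus, queries)
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
```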

For more information, check out our publications: