Anserini is a toolkit for reproducible information retrieval research. By building on Lucene, we aim to bridge the gap between academic information retrieval research and the practice of building real-world search applications. Anserini grew out of a reproducibility study of various open-source retrieval engines in 2016 (Lin et al., ECIR 2016); see Yang et al. (SIGIR 2017) and Yang et al. (JDIQ 2018) for overviews.
Anserini is packaged in a self-contained fatjar, which also provides the simplest way to get started. Assuming you've already got Java installed, fetch the fatjar:
```bash
wget https://repo1.maven.org/maven2/io/anserini/anserini/0.24.2/anserini-0.24.2-fatjar.jar
```
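If you'd like to double-check your environment first (building Anserini requires Java 11, per the build notes below), print the active Java version:

```bash
# Confirm that Java is installed and on the PATH.
java -version
```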
The following command will generate a SPLADE++ ED run with the dev queries (encoded using ONNX) on the MS MARCO passage corpus:
```bash
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchCollection \
  -index msmarco-v1-passage-splade-pp-ed \
  -topics msmarco-v1-passage-dev \
  -encoder SpladePlusPlusEnsembleDistil \
  -output run.msmarco-v1-passage-dev.splade-pp-ed-onnx.txt \
  -impact -pretokenized
```
To evaluate:
```bash
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.splade-pp-ed-onnx.txt
```
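This run should score around 0.3828 MRR@10 on the dev queries (see the results table below). If you want to inspect the run file itself, each line follows the standard TREC run format: query id, the literal `Q0`, document id, rank, score, and a run tag. The lines below illustrate the format only; the values are hypothetical:

```
1048585 Q0 7187158 1 14.4600 Anserini
1048585 Q0 7187157 2 13.8700 Anserini
1048585 Q0 7187163 3 13.2100 Anserini
```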
See below for instructions on using Anserini to reproduce runs from MS MARCO passage and BEIR, all directly from the fatjar!
Regressions directly from the fatjar: MS MARCO passage
Currently, Anserini provides support for the following models:
- BM25
- SPLADE++ EnsembleDistil: pre-encoded queries and ONNX query encoding
- cosDPR-distil: pre-encoded queries and ONNX query encoding
- BGE-base-en-v1.5: pre-encoded queries and ONNX query encoding
The following snippet will generate the complete set of results for MS MARCO passage:
```bash
# BM25
TOPICS=(msmarco-v1-passage-dev dl19-passage dl20-passage); for t in "${TOPICS[@]}"
do
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchCollection -index msmarco-v1-passage -topics ${t} -output run.${t}.bm25.txt -threads 16 -bm25
done
# SPLADE++ ED
TOPICS=(msmarco-v1-passage-dev dl19-passage dl20-passage); for t in "${TOPICS[@]}"
do
# Using pre-encoded queries
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchCollection -index msmarco-v1-passage-splade-pp-ed -topics ${t}-splade-pp-ed -output run.${t}.splade-pp-ed-pre.txt -threads 16 -impact -pretokenized
# Using ONNX
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchCollection -index msmarco-v1-passage-splade-pp-ed -topics ${t} -encoder SpladePlusPlusEnsembleDistil -output run.${t}.splade-pp-ed-onnx.txt -threads 16 -impact -pretokenized
done
# cosDPR-distil
TOPICS=(msmarco-v1-passage-dev dl19-passage dl20-passage); for t in "${TOPICS[@]}"
do
# Using pre-encoded queries, full index
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-cos-dpr-distil -topics ${t}-cos-dpr-distil -output run.${t}.cos-dpr-distil-full-pre.txt -threads 16 -efSearch 1000
# Using pre-encoded queries, quantized index
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-cos-dpr-distil-quantized -topics ${t}-cos-dpr-distil -output run.${t}.cos-dpr-distil-quantized-pre.txt -threads 16 -efSearch 1000
# Using ONNX, full index
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-cos-dpr-distil -topics ${t} -encoder CosDprDistil -output run.${t}.cos-dpr-distil-full-onnx.txt -threads 16 -efSearch 1000
# Using ONNX, quantized index
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-cos-dpr-distil-quantized -topics ${t} -encoder CosDprDistil -output run.${t}.cos-dpr-distil-quantized-onnx.txt -threads 16 -efSearch 1000
done
# BGE-base-en-v1.5
TOPICS=(msmarco-v1-passage-dev dl19-passage dl20-passage); for t in "${TOPICS[@]}"
do
# Using pre-encoded queries, full index
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-bge-base-en-v1.5 -topics ${t}-bge-base-en-v1.5 -output run.${t}.bge-base-en-v1.5-full-pre.txt -threads 16 -efSearch 1000
# Using pre-encoded queries, quantized index
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-bge-base-en-v1.5-quantized -topics ${t}-bge-base-en-v1.5 -output run.${t}.bge-base-en-v1.5-quantized-pre.txt -threads 16 -efSearch 1000
# Using ONNX, full index
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-bge-base-en-v1.5 -topics ${t} -encoder BgeBaseEn15 -output run.${t}.bge-base-en-v1.5-full-onnx.txt -threads 16 -efSearch 1000
# Using ONNX, quantized index
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index msmarco-v1-passage-bge-base-en-v1.5-quantized -topics ${t} -encoder BgeBaseEn15 -output run.${t}.bge-base-en-v1.5-quantized-onnx.txt -threads 16 -efSearch 1000
done
```
Here are the expected scores (dev using MRR@10, DL19 and DL20 using nDCG@10):
Model | dev | DL19 | DL20 |
---|---|---|---|
BM25 | 0.1840 | 0.5058 | 0.4796 |
SPLADE++ ED (pre-encoded) | 0.3830 | 0.7317 | 0.7198 |
SPLADE++ ED (ONNX) | 0.3828 | 0.7308 | 0.7197 |
cosDPR-distil: full HNSW (pre-encoded) | 0.3887 | 0.7250 | 0.7025 |
cosDPR-distil: quantized HNSW (pre-encoded) | 0.3897 | 0.7240 | 0.7004 |
cosDPR-distil: full HNSW (ONNX) | 0.3887 | 0.7250 | 0.7025 |
cosDPR-distil: quantized HNSW (ONNX) | 0.3899 | 0.7247 | 0.6996 |
BGE-base-en-v1.5: full HNSW (pre-encoded) | 0.3574 | 0.7065 | 0.6780 |
BGE-base-en-v1.5: quantized HNSW (pre-encoded) | 0.3572 | 0.7016 | 0.6738 |
BGE-base-en-v1.5: full HNSW (ONNX) | 0.3575 | 0.7016 | 0.6768 |
BGE-base-en-v1.5: quantized HNSW (ONNX) | 0.3575 | 0.7017 | 0.6767 |
And here's the snippet of code to perform the evaluation (which will yield the results above):
```bash
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.dl19-passage.txt
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.dl20-passage.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.bm25.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt run.dl19-passage.bm25.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt run.dl20-passage.bm25.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.splade-pp-ed-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt run.dl19-passage.splade-pp-ed-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt run.dl20-passage.splade-pp-ed-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.splade-pp-ed-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt run.dl19-passage.splade-pp-ed-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt run.dl20-passage.splade-pp-ed-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.cos-dpr-distil-full-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt run.dl19-passage.cos-dpr-distil-full-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt run.dl20-passage.cos-dpr-distil-full-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.cos-dpr-distil-quantized-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt run.dl19-passage.cos-dpr-distil-quantized-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt run.dl20-passage.cos-dpr-distil-quantized-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.cos-dpr-distil-full-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt run.dl19-passage.cos-dpr-distil-full-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt run.dl20-passage.cos-dpr-distil-full-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.cos-dpr-distil-quantized-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt run.dl19-passage.cos-dpr-distil-quantized-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt run.dl20-passage.cos-dpr-distil-quantized-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.bge-base-en-v1.5-full-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt run.dl19-passage.bge-base-en-v1.5-full-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt run.dl20-passage.bge-base-en-v1.5-full-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.bge-base-en-v1.5-quantized-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt run.dl19-passage.bge-base-en-v1.5-quantized-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt run.dl20-passage.bge-base-en-v1.5-quantized-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.bge-base-en-v1.5-full-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt run.dl19-passage.bge-base-en-v1.5-full-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt run.dl20-passage.bge-base-en-v1.5-full-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.bge-base-en-v1.5-quantized-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt run.dl19-passage.bge-base-en-v1.5-quantized-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt run.dl20-passage.bge-base-en-v1.5-quantized-onnx.txt
```
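Since these evaluation commands differ only in the run-file suffix, the same results can be generated with a compact loop. This is just a sketch, assuming the run files follow the naming scheme used above:

```bash
# dev is scored with MRR@10; DL19 and DL20 with nDCG@10.
MODELS=(bm25 splade-pp-ed-pre splade-pp-ed-onnx
  cos-dpr-distil-full-pre cos-dpr-distil-quantized-pre cos-dpr-distil-full-onnx cos-dpr-distil-quantized-onnx
  bge-base-en-v1.5-full-pre bge-base-en-v1.5-quantized-pre bge-base-en-v1.5-full-onnx bge-base-en-v1.5-quantized-onnx)
for m in "${MODELS[@]}"
do
  java -cp anserini-0.24.2-fatjar.jar trec_eval -c -M 10 -m recip_rank qrels.msmarco-passage.dev-subset.txt run.msmarco-v1-passage-dev.${m}.txt
  java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl19-passage.txt run.dl19-passage.${m}.txt
  java -cp anserini-0.24.2-fatjar.jar trec_eval -m ndcg_cut.10 -c qrels.dl20-passage.txt run.dl20-passage.${m}.txt
done
```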
Regressions directly from the fatjar: BEIR
Currently, Anserini provides support for the following models:
- Flat = BM25, "flat" bag-of-words baseline
- MF = BM25, "multifield" bag-of-words baseline
- S = SPLADE++ EnsembleDistil:
  - Pre-encoded queries (Sp)
  - ONNX query encoding (So)
- D = BGE-base-en-v1.5:
  - Pre-encoded queries (Dp)
  - ONNX query encoding (Do)
The following snippet will generate the complete set of results for BEIR:
```bash
CORPORA=(trec-covid bioasq nfcorpus nq hotpotqa fiqa signal1m trec-news robust04 arguana webis-touche2020 cqadupstack-android cqadupstack-english cqadupstack-gaming cqadupstack-gis cqadupstack-mathematica cqadupstack-physics cqadupstack-programmers cqadupstack-stats cqadupstack-tex cqadupstack-unix cqadupstack-webmasters cqadupstack-wordpress quora dbpedia-entity scidocs fever climate-fever scifact); for c in "${CORPORA[@]}"
do
# "flat" indexes
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchCollection -index beir-v1.0.0-${c}.flat -topics beir-${c} -output run.beir.${c}.flat.txt -bm25 -removeQuery
# "multifield" indexes
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchCollection -index beir-v1.0.0-${c}.multifield -topics beir-${c} -output run.beir.${c}.multifield.txt -bm25 -removeQuery -fields contents=1.0 title=1.0
# SPLADE++ ED, pre-encoded queries
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchCollection -index beir-v1.0.0-${c}.splade-pp-ed -topics beir-${c}.splade-pp-ed -output run.beir.${c}.splade-pp-ed-pre.txt -impact -pretokenized -removeQuery
# SPLADE++ ED, ONNX
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchCollection -index beir-v1.0.0-${c}.splade-pp-ed -topics beir-${c} -encoder SpladePlusPlusEnsembleDistil -output run.beir.${c}.splade-pp-ed-onnx.txt -impact -pretokenized -removeQuery
# BGE-base-en-v1.5, pre-encoded queries
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index beir-v1.0.0-${c}.bge-base-en-v1.5 -topics beir-${c}.bge-base-en-v1.5 -output run.beir.${c}.bge-pre.txt -threads 16 -efSearch 1000 -removeQuery
# BGE-base-en-v1.5, ONNX
java -cp anserini-0.24.2-fatjar.jar io.anserini.search.SearchHnswDenseVectors -index beir-v1.0.0-${c}.bge-base-en-v1.5 -topics beir-${c} -encoder BgeBaseEn15 -output run.beir.${c}.bge-onnx.txt -threads 16 -efSearch 1000 -removeQuery
done
```
Here are the expected nDCG@10 scores:
Corpus | Flat | MF | Sp | So | Dp | Do |
---|---|---|---|---|---|---|
trec-covid | 0.5947 | 0.6559 | 0.7274 | 0.7270 | 0.7834 | 0.7835 |
bioasq | 0.5225 | 0.4646 | 0.4980 | 0.4980 | 0.4042 | 0.4042 |
nfcorpus | 0.3218 | 0.3254 | 0.3470 | 0.3473 | 0.3735 | 0.3738 |
nq | 0.3055 | 0.3285 | 0.5378 | 0.5372 | 0.5413 | 0.5415 |
hotpotqa | 0.6330 | 0.6027 | 0.6868 | 0.6868 | 0.7242 | 0.7241 |
fiqa | 0.2361 | 0.2361 | 0.3475 | 0.3473 | 0.4065 | 0.4065 |
signal1m | 0.3304 | 0.3304 | 0.3008 | 0.3006 | 0.2869 | 0.2869 |
trec-news | 0.3952 | 0.3977 | 0.4152 | 0.4169 | 0.4411 | 0.4410 |
robust04 | 0.4070 | 0.4070 | 0.4679 | 0.4651 | 0.4467 | 0.4437 |
arguana | 0.3970 | 0.4142 | 0.5203 | 0.5218 | 0.6361 | 0.6228 |
webis-touche2020 | 0.4422 | 0.3673 | 0.2468 | 0.2464 | 0.2570 | 0.2571 |
cqadupstack-android | 0.3801 | 0.3709 | 0.3904 | 0.3898 | 0.5075 | 0.5076 |
cqadupstack-english | 0.3453 | 0.3321 | 0.4079 | 0.4078 | 0.4855 | 0.4855 |
cqadupstack-gaming | 0.4822 | 0.4418 | 0.4957 | 0.4959 | 0.5965 | 0.5967 |
cqadupstack-gis | 0.2901 | 0.2904 | 0.3150 | 0.3148 | 0.4129 | 0.4133 |
cqadupstack-mathematica | 0.2015 | 0.2046 | 0.2377 | 0.2379 | 0.3163 | 0.3163 |
cqadupstack-physics | 0.3214 | 0.3248 | 0.3599 | 0.3597 | 0.4722 | 0.4724 |
cqadupstack-programmers | 0.2802 | 0.2963 | 0.3401 | 0.3399 | 0.4242 | 0.4238 |
cqadupstack-stats | 0.2711 | 0.2790 | 0.2990 | 0.2980 | 0.3731 | 0.3728 |
cqadupstack-tex | 0.2244 | 0.2086 | 0.2530 | 0.2529 | 0.3115 | 0.3115 |
cqadupstack-unix | 0.2749 | 0.2788 | 0.3167 | 0.3170 | 0.4219 | 0.4220 |
cqadupstack-webmasters | 0.3059 | 0.3008 | 0.3167 | 0.3166 | 0.4065 | 0.4072 |
cqadupstack-wordpress | 0.2483 | 0.2562 | 0.2733 | 0.2718 | 0.3547 | 0.3547 |
quora | 0.7886 | 0.7886 | 0.8343 | 0.8344 | 0.8890 | 0.8876 |
dbpedia-entity | 0.3180 | 0.3128 | 0.4366 | 0.4374 | 0.4077 | 0.4076 |
scidocs | 0.1490 | 0.1581 | 0.1591 | 0.1588 | 0.2170 | 0.2172 |
fever | 0.6513 | 0.7530 | 0.7882 | 0.7879 | 0.8620 | 0.8620 |
climate-fever | 0.1651 | 0.2129 | 0.2297 | 0.2298 | 0.3119 | 0.3117 |
scifact | 0.6789 | 0.6647 | 0.7041 | 0.7036 | 0.7408 | 0.7408 |
And here's the snippet of code to perform the evaluation (which will yield the results above):
```bash
CORPORA=(trec-covid bioasq nfcorpus nq hotpotqa fiqa signal1m trec-news robust04 arguana webis-touche2020 cqadupstack-android cqadupstack-english cqadupstack-gaming cqadupstack-gis cqadupstack-mathematica cqadupstack-physics cqadupstack-programmers cqadupstack-stats cqadupstack-tex cqadupstack-unix cqadupstack-webmasters cqadupstack-wordpress quora dbpedia-entity scidocs fever climate-fever scifact); for c in "${CORPORA[@]}"
do
wget https://raw.githubusercontent.com/castorini/anserini-tools/master/topics-and-qrels/qrels.beir-v1.0.0-${c}.test.txt
echo $c
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -m ndcg_cut.10 qrels.beir-v1.0.0-${c}.test.txt run.beir.${c}.flat.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -m ndcg_cut.10 qrels.beir-v1.0.0-${c}.test.txt run.beir.${c}.multifield.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -m ndcg_cut.10 qrels.beir-v1.0.0-${c}.test.txt run.beir.${c}.splade-pp-ed-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -m ndcg_cut.10 qrels.beir-v1.0.0-${c}.test.txt run.beir.${c}.splade-pp-ed-onnx.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -m ndcg_cut.10 qrels.beir-v1.0.0-${c}.test.txt run.beir.${c}.bge-pre.txt
java -cp anserini-0.24.2-fatjar.jar trec_eval -c -m ndcg_cut.10 qrels.beir-v1.0.0-${c}.test.txt run.beir.${c}.bge-onnx.txt
done
```
Most Anserini features are exposed in the Pyserini Python interface. If you're more comfortable with Python, start there; that said, Anserini forms an important building block of Pyserini, so it remains worthwhile to learn about Anserini.
You'll need Java 11 and Maven 3.3+ to build Anserini.
Clone our repo with the `--recurse-submodules` option to make sure the `eval/` submodule also gets cloned (alternatively, use `git submodule update --init`).
Then, build using Maven:
```bash
mvn clean package appassembler:assemble
```
The `tools/` directory, which contains evaluation tools and other scripts, is actually a separate repository (castorini/anserini-tools), integrated as a Git submodule so that it can be shared across related projects.
Build as follows (you might get warnings, but they are okay to ignore):
```bash
cd tools/eval && tar xvfz trec_eval.9.0.4.tar.gz && cd trec_eval.9.0.4 && make && cd ../../..
cd tools/eval/ndeval && make && cd ../../..
```
With that, you should be ready to go. The onboarding path for Anserini starts here!
Windows tips
If you are using Windows, use WSL2 to build Anserini. Refer to the WSL2 installation document to install WSL2 if you haven't already.
Note that on Windows without WSL2, tests may fail due to encoding issues, see #1466.
A simple workaround is to skip tests by adding `-Dmaven.test.skip=true` to the above `mvn` command.
See #1121 for additional discussions on debugging Windows build errors.
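Putting the workaround together, the full build command with tests skipped looks like this:

```bash
# Build Anserini without running tests (avoids the encoding-related test failures; see #1466).
mvn clean package appassembler:assemble -Dmaven.test.skip=true
```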
Anserini is designed to support end-to-end experiments on various standard IR test collections out of the box. Each of these end-to-end regressions starts from the raw corpus, builds the necessary index, performs retrieval runs, and generates evaluation results. See individual pages for details.
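Each regression can typically be driven end-to-end by the `run_regression.py` script (shown concretely for BEIR below). As a sketch, the MS MARCO V1 passage baseline would look something like the following, where the regression name `msmarco-v1-passage` is an assumption based on the index names above:

```bash
# Index the raw corpus, verify index statistics, run retrieval, and evaluate, all in one go.
python src/main/python/run_regression.py --index --verify --search --regression msmarco-v1-passage
```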
MS MARCO V1 Passage Regressions
Model | dev | DL19 | DL20 |
---|---|---|---|
Unsupervised Sparse | |||
BoW baselines | + | + | + |
Quantized BM25 | ✓ | ✓ | ✓ |
WP baselines | + | + | + |
Huggingface WP baselines | + | + | + |
doc2query | + | ||
doc2query-T5 | + | + | + |
Learned Sparse (uniCOIL family) | |||
uniCOIL noexp | ✓ | ✓ | ✓ |
uniCOIL with doc2query-T5 | ✓ | ✓ | ✓ |
uniCOIL with TILDE | ✓ | ||
Learned Sparse (other) | |||
DeepImpact | ✓ | ||
SPLADEv2 | ✓ | ||
SPLADE++ CoCondenser-EnsembleDistil | ✓ | ✓ | ✓ |
SPLADE++ CoCondenser-EnsembleDistil (ONNX) | ✓ | ✓ | ✓ |
SPLADE++ CoCondenser-SelfDistil | ✓ | ✓ | ✓ |
SPLADE++ CoCondenser-SelfDistil (ONNX) | ✓ | ✓ | ✓ |
Learned Dense (HNSW) | |||
cosDPR-distil w/ HNSW fp32 | ✓ | ✓ | ✓ |
cosDPR-distil w/ HNSW fp32 (ONNX) | ✓ | ✓ | ✓ |
cosDPR-distil w/ HNSW int8 | ✓ | ✓ | ✓ |
cosDPR-distil w/ HNSW int8 (ONNX) | ✓ | ✓ | ✓ |
BGE-base-en-v1.5 w/ HNSW fp32 | ✓ | ✓ | ✓ |
BGE-base-en-v1.5 w/ HNSW fp32 (ONNX) | ✓ | ✓ | ✓ |
BGE-base-en-v1.5 w/ HNSW int8 | ✓ | ✓ | ✓ |
BGE-base-en-v1.5 w/ HNSW int8 (ONNX) | ✓ | ✓ | ✓ |
OpenAI Ada2 w/ HNSW fp32 | ✓ | ✓ | ✓ |
OpenAI Ada2 w/ HNSW int8 | ✓ | ✓ | ✓ |
Cohere English v3.0 w/ HNSW fp32 | ✓ | ✓ | ✓ |
Cohere English v3.0 w/ HNSW int8 | ✓ | ✓ | ✓ |
Learned Dense (Inverted; experimental) | |||
cosDPR-distil w/ "fake words" | ✓ | ✓ | ✓ |
cosDPR-distil w/ "LexLSH" | ✓ | ✓ | ✓ |
Corpora | Size | Checksum |
---|---|---|
Quantized BM25 | 1.2 GB | 0a623e2c97ac6b7e814bf1323a97b435 |
uniCOIL (noexp) | 2.7 GB | f17ddd8c7c00ff121c3c3b147d2e17d8 |
uniCOIL (d2q-T5) | 3.4 GB | 78eef752c78c8691f7d61600ceed306f |
uniCOIL (TILDE) | 3.9 GB | 12a9c289d94e32fd63a7d39c9677d75c |
DeepImpact | 3.6 GB | 73843885b503af3c8b3ee62e5f5a9900 |
SPLADEv2 | 9.9 GB | b5d126f5d9a8e1b3ef3f5cb0ba651725 |
SPLADE++ CoCondenser-EnsembleDistil | 4.2 GB | e489133bdc54ee1e7c62a32aa582bc77 |
SPLADE++ CoCondenser-SelfDistil | 4.8 GB | cb7e264222f2bf2221dd2c9d28190be1 |
cosDPR-distil | 57 GB | e20ffbc8b5e7f760af31298aefeaebbd |
BGE-base-en-v1.5 | 59 GB | 353d2c9e72e858897ad479cca4ea0db1 |
OpenAI-ada2 | 109 GB | a4d843d522ff3a3af7edbee789a63402 |
Cohere embed-english-v3.0 | 38 GB | 06a6e38a0522850c6aa504db7b2617f5 |
MS MARCO V1 Document Regressions
Model | dev | DL19 | DL20 |
---|---|---|---|
Unsupervised Lexical, Complete Doc* | |||
BoW baselines | + | + | + |
WP baselines | + | + | + |
Huggingface WP baselines | + | + | + |
doc2query-T5 | + | + | + |
Unsupervised Lexical, Segmented Doc* | |||
BoW baselines | + | + | + |
WP baselines | + | + | + |
doc2query-T5 | + | + | + |
Learned Sparse Lexical | |||
uniCOIL noexp | ✓ | ✓ | ✓ |
uniCOIL with doc2query-T5 | ✓ | ✓ | ✓ |
Corpora | Size | Checksum |
---|---|---|
MS MARCO V1 doc: uniCOIL (noexp) | 11 GB | 11b226e1cacd9c8ae0a660fd14cdd710 |
MS MARCO V1 doc: uniCOIL (d2q-T5) | 19 GB | 6a00e2c0c375cb1e52c83ae5ac377ebb |
MS MARCO V2 Passage Regressions
Model | dev | DL21 | DL22 | DL23 |
---|---|---|---|---|
Unsupervised Lexical, Original Corpus | ||||
baselines | + | + | + | + |
doc2query-T5 | + | + | + | + |
Unsupervised Lexical, Augmented Corpus | ||||
baselines | + | + | + | + |
doc2query-T5 | + | + | + | + |
Learned Sparse Lexical | ||||
uniCOIL noexp zero-shot | ✓ | ✓ | ✓ | |
uniCOIL with doc2query-T5 zero-shot | ✓ | ✓ | ✓ | |
SPLADE++ CoCondenser-EnsembleDistil | ✓ | ✓ | ✓ | |
SPLADE++ CoCondenser-SelfDistil | ✓ | ✓ | ✓ |
Corpora | Size | Checksum |
---|---|---|
uniCOIL (noexp) | 24 GB | d9cc1ed3049746e68a2c91bf90e5212d |
uniCOIL (d2q-T5) | 41 GB | 1949a00bfd5e1f1a230a04bbc1f01539 |
SPLADE++ CoCondenser-EnsembleDistil | 66 GB | 2cdb2adc259b8fa6caf666b20ebdc0e8 |
SPLADE++ CoCondenser-SelfDistil | 76 GB | 061930dd615c7c807323ea7fc7957877 |
MS MARCO V2 Document Regressions
Model | dev | DL21 | DL22 | DL23 |
---|---|---|---|---|
Unsupervised Lexical, Complete Doc | ||||
baselines | + | + | + | + |
doc2query-T5 | + | + | + | + |
Unsupervised Lexical, Segmented Doc | ||||
baselines | + | + | + | + |
doc2query-T5 | + | + | + | + |
Learned Sparse Lexical | ||||
uniCOIL noexp zero-shot | ✓ | ✓ | ||
uniCOIL with doc2query-T5 zero-shot | ✓ | ✓ |
Corpora | Size | Checksum |
---|---|---|
MS MARCO V2 doc: uniCOIL (noexp) | 55 GB | 97ba262c497164de1054f357caea0c63 |
MS MARCO V2 doc: uniCOIL (d2q-T5) | 72 GB | c5639748c2cbad0152e10b0ebde3b804 |
BEIR (v1.0.0) Regressions
Key:
- F1 = "flat" baseline (Lucene analyzer)
- F2 = "flat" baseline (pre-tokenized with `bert-base-uncased` tokenizer)
- MF = "multifield" baseline (Lucene analyzer)
- U1 = uniCOIL (noexp)
- S1 = SPLADE++ CoCondenser-EnsembleDistil: pre-encoded queries (✓), ONNX (O)
- D1 = BGE-base-en-v1.5:
  - D1o = original HNSW indexes: pre-encoded queries (✓), ONNX (O)
  - D1q = quantized HNSW indexes: pre-encoded queries (✓), ONNX (O)
See instructions below the table for how to reproduce results for a model on all BEIR corpora "in one go".
Corpus | F1 | F2 | MF | U1 | S1 | D1o | D1q |
---|---|---|---|---|---|---|---|
TREC-COVID | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
BioASQ | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
NFCorpus | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
NQ | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
HotpotQA | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
FiQA-2018 | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
Signal-1M(RT) | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
TREC-NEWS | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
Robust04 | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
ArguAna | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
Touche2020 | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
CQADupStack-Android | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
CQADupStack-English | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
CQADupStack-Gaming | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
CQADupStack-Gis | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
CQADupStack-Mathematica | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
CQADupStack-Physics | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
CQADupStack-Programmers | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
CQADupStack-Stats | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
CQADupStack-Tex | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
CQADupStack-Unix | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
CQADupStack-Webmasters | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
CQADupStack-Wordpress | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
Quora | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
DBPedia | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
SCIDOCS | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
FEVER | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
Climate-FEVER | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
SciFact | ✓ | ✓ | ✓ | ✓ | ✓ O | ✓ O | ✓ O |
To reproduce the SPLADE++ CoCondenser-EnsembleDistil results, start by downloading the collection:
```bash
wget https://rgw.cs.uwaterloo.ca/pyserini/data/beir-v1.0.0-splade-pp-ed.tar -P collections/
tar xvf collections/beir-v1.0.0-splade-pp-ed.tar -C collections/
```
The tarball is 42 GB and has MD5 checksum `9c7de5b444a788c9e74c340bf833173b`.
Once you've unpacked the data, the following commands will loop over all BEIR corpora and run the regressions:
```bash
MODEL="splade-pp-ed"; CORPORA=(trec-covid bioasq nfcorpus nq hotpotqa fiqa signal1m trec-news robust04 arguana webis-touche2020 cqadupstack-android cqadupstack-english cqadupstack-gaming cqadupstack-gis cqadupstack-mathematica cqadupstack-physics cqadupstack-programmers cqadupstack-stats cqadupstack-tex cqadupstack-unix cqadupstack-webmasters cqadupstack-wordpress quora dbpedia-entity scidocs fever climate-fever scifact); for c in "${CORPORA[@]}"
do
echo "Running $c..."
python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-${c}-${MODEL} > logs/log.beir-v1.0.0-${c}-${MODEL} 2>&1
done
```
You can verify the results by examining the log files in `logs/`.
For the other models, modify the above commands as follows:
Key | Corpus | Checksum | MODEL |
---|---|---|---|
F1 | `corpus` | `faefd5281b662c72ce03d22021e4ff6b` | `flat` |
F2 | `corpus-wp` | `3cf8f3dcdcadd49362965dd4466e6ff2` | `flat-wp` |
MF | `corpus` | `faefd5281b662c72ce03d22021e4ff6b` | `multifield` |
U1 | `unicoil-noexp` | `4fd04d2af816a6637fc12922cccc8a83` | `unicoil-noexp` |
S1 | `splade-pp-ed` | `9c7de5b444a788c9e74c340bf833173b` | `splade-pp-ed` |
D1 | `bge-base-en-v1.5` | `e4e8324ba3da3b46e715297407a24f00` | `bge-base-en-v1.5-hnsw` |
The "Corpus" above should be substituted into the full file name beir-v1.0.0-${corpus}.tar
, e.g., beir-v1.0.0-bge-base-en-v1.5.tar
.
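For example, to fetch and unpack the corpus for the BGE-base-en-v1.5 runs, assuming it is hosted at the same location and follows the same naming pattern as the SPLADE++ tarball above:

```bash
# Substitute the "Corpus" key from the table above into the file name pattern.
wget https://rgw.cs.uwaterloo.ca/pyserini/data/beir-v1.0.0-bge-base-en-v1.5.tar -P collections/
tar xvf collections/beir-v1.0.0-bge-base-en-v1.5.tar -C collections/
```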
Cross-lingual and Multi-lingual Regressions
- Regressions for Mr. TyDi (v1.1) baselines: ar, bn, en, fi, id, ja, ko, ru, sw, te, th
- Regressions for MIRACL (v1.0) baselines: ar, bn, en, es, fa, fi, fr, hi, id, ja, ko, ru, sw, te, th, zh
- Regressions for TREC 2022 NeuCLIR Track BM25 (query translation): Persian, Russian, Chinese
- Regressions for TREC 2022 NeuCLIR Track BM25 (document translation): Persian, Russian, Chinese
- Regressions for TREC 2022 NeuCLIR Track SPLADE (query translation): Persian, Russian, Chinese
- Regressions for TREC 2022 NeuCLIR Track SPLADE (document translation): Persian, Russian, Chinese
- Regressions for HC4 (v1.0) baselines on HC4 corpora: Persian, Russian, Chinese
- Regressions for HC4 (v1.0) baselines on original NeuCLIR22 corpora: Persian, Russian, Chinese
- Regressions for HC4 (v1.0) baselines on translated NeuCLIR22 corpora: Persian, Russian, Chinese
- Regressions for NTCIR-8 ACLIA (IR4QA subtask, Monolingual Chinese)
- Regressions for CLEF 2006 Monolingual French
- Regressions for TREC 2002 Monolingual Arabic
- Regressions for FIRE 2012 monolingual baselines: Bengali, Hindi, English
- Regressions for CIRAL (v1.0) BM25 (query translation): Hausa, Somali, Swahili, Yoruba
- Regressions for CIRAL (v1.0) BM25 (document translation): Hausa, Somali, Swahili, Yoruba
Other Regressions
- Regressions for Disks 1 & 2 (TREC 1-3), Disks 4 & 5 (TREC 7-8, Robust04), AQUAINT (Robust05)
- Regressions for the New York Times Corpus (Core17), the Washington Post Corpus (Core18)
- Regressions for Wt10g, Gov2
- Regressions for ClueWeb09 (Category B), ClueWeb12-B13, ClueWeb12
- Regressions for Tweets2011 (MB11 & MB12), Tweets2013 (MB13 & MB14)
- Regressions for Complex Answer Retrieval (CAR17): v1.5, v2.0, v2.0 with doc2query
- Regressions for TREC News Tracks (Background Linking Task): 2018, 2019, 2020
- Regressions for FEVER Fact Verification
- Regressions for DPR Wikipedia QA baselines: 100-word splits, 6/3 sliding window sentences
The experiments described below are not associated with rigorous end-to-end regression testing and thus provide a lower standard of reproducibility. For the most part, manual copying and pasting of commands into a shell is required to reproduce our results.
MS MARCO V1
- Reproducing BM25 baselines for MS MARCO Passage Ranking
- Reproducing BM25 baselines for MS MARCO Document Ranking
- Reproducing baselines for the MS MARCO Document Ranking Leaderboard
- Reproducing doc2query results (MS MARCO Passage Ranking and TREC-CAR)
- Reproducing docTTTTTquery results (MS MARCO Passage and Document Ranking)
- Notes about reproduction issues with MS MARCO Document Ranking w/ docTTTTTquery
TREC-COVID and CORD-19
Other Experiments and Features
- Working with the 20 Newsgroups Dataset
- Guide to BM25 baselines for the FEVER Fact Verification Task
- Guide to reproducing "Neural Hype" Experiments
- Guide to running experiments on the AI2 Open Research Corpus
- Experiments from Yang et al. (JDIQ 2018)
- Runbooks for TREC 2018: [Anserini group] [h2oloo group]
- Runbook for ECIR 2019 paper on axiomatic semantic term matching
- Runbook for ECIR 2019 paper on cross-collection relevance feedback
- Support for approximate nearest-neighbor search on dense vectors with inverted indexes
If you've found Anserini to be helpful, we have a simple request for you to contribute back.
In the course of reproducing baseline results on standard test collections, please let us know if you're successful by sending us a pull request with a simple note, like what appears at the bottom of the page for Disks 4 & 5.
Reproducibility is important to us, and we'd like to know about successes as well as failures.
Since the regression documentation is auto-generated, pull requests should be sent against the raw templates.
Then the regression documentation can be generated using the `bin/build.sh` script.
In turn, you'll be recognized as a contributor.
Beyond that, there are always open issues we would appreciate help on!
- v0.24.2: February 27, 2024 [Release Notes]
- v0.24.1: January 27, 2024 [Release Notes]
- v0.24.0: December 28, 2023 [Release Notes]
- v0.23.0: November 16, 2023 [Release Notes]
- v0.22.1: October 18, 2023 [Release Notes]
- v0.22.0: August 28, 2023 [Release Notes]
- v0.21.0: March 31, 2023 [Release Notes]
- v0.20.0: January 20, 2023 [Release Notes]
Older releases (and historic notes):
- v0.16.2: December 12, 2022 [Release Notes]
- v0.16.1: November 2, 2022 [Release Notes]
- v0.16.0: October 23, 2022 [Release Notes]
- v0.15.0: September 22, 2022 [Release Notes]
- v0.14.4: July 31, 2022 [Release Notes]
- v0.14.3: May 9, 2022 [Release Notes]
- v0.14.2: March 24, 2022 [Release Notes]
- v0.14.1: February 27, 2022 [Release Notes]
- v0.14.0: January 10, 2022 [Release Notes]
- v0.13.5: November 2, 2021 [Release Notes]
- v0.13.4: October 22, 2021 [Release Notes]
- v0.13.3: August 22, 2021 [Release Notes]
- v0.13.2: July 20, 2021 [Release Notes]
- v0.13.1: June 29, 2021 [Release Notes]
- v0.13.0: June 22, 2021 [Release Notes]
- v0.12.0: April 29, 2021 [Release Notes]
- v0.11.0: February 13, 2021 [Release Notes]
- v0.10.1: January 8, 2021 [Release Notes]
- v0.10.0: November 25, 2020 [Release Notes]
- v0.9.4: June 25, 2020 [Release Notes]
- v0.9.3: May 26, 2020 [Release Notes]
- v0.9.2: May 14, 2020 [Release Notes]
- v0.9.1: May 6, 2020 [Release Notes]
- v0.9.0: April 18, 2020 [Release Notes]
- v0.8.1: March 22, 2020 [Release Notes]
- v0.8.0: March 11, 2020 [Release Notes]
- v0.7.2: January 25, 2020 [Release Notes]
- v0.7.1: January 9, 2020 [Release Notes]
- v0.7.0: December 13, 2019 [Release Notes]
- v0.6.0: September 6, 2019 [Release Notes][Known Issues]
- v0.5.1: June 11, 2019 [Release Notes]
- v0.5.0: June 5, 2019 [Release Notes]
- v0.4.0: March 4, 2019 [Release Notes]
- v0.3.0: December 16, 2018 [Release Notes]
- v0.2.0: September 10, 2018 [Release Notes]
- v0.1.0: July 4, 2018 [Release Notes]
- Anserini was upgraded to Lucene 9.3 at commit `272565` (8/2/2022): this upgrade created backward compatibility issues, see #1952. Anserini will automatically detect Lucene 8 indexes and disable consistent tie-breaking to avoid runtime errors. However, Lucene 9 code running on Lucene 8 indexes may give slightly different results than Lucene 8 code running on Lucene 8 indexes. Lucene 8 code will not run on Lucene 9 indexes. Pyserini has also been upgraded and similar issues apply: Lucene 9 code running on Lucene 8 indexes may give slightly different results than Lucene 8 code running on Lucene 8 indexes.
- Anserini was upgraded to Java 11 at commit `17b702d` (7/11/2019) from Java 8. Maven 3.3+ is also required.
- Anserini was upgraded to Lucene 8.0 as of commit `75e36f9` (6/12/2019); prior to that, the toolkit used Lucene 7.6. Based on preliminary experiments, query evaluation latency was much improved in Lucene 8. As a result of this upgrade, results of all regressions changed slightly. To reproduce old results from Lucene 7.6, use v0.5.1.
- Jimmy Lin, Matt Crane, Andrew Trotman, Jamie Callan, Ishan Chattopadhyaya, John Foley, Grant Ingersoll, Craig Macdonald, Sebastiano Vigna. Toward Reproducible Baselines: The Open-Source IR Reproducibility Challenge. ECIR 2016.
- Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Enabling the Use of Lucene for Information Retrieval Research. SIGIR 2017.
- Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Reproducible Ranking Baselines Using Lucene. Journal of Data and Information Quality, 10(4), Article 16, 2018.
This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Previous support came from the U.S. National Science Foundation under IIS-1423002 and CNS-1405688. Any opinions, findings, and conclusions or recommendations expressed do not necessarily reflect the views of the sponsors.