# Anserini Regressions: MS MARCO Document Ranking
**Models**: various bag-of-words approaches on segmented documents
This page documents regression experiments on the [MS MARCO document ranking task](https://github.com/microsoft/MSMARCO-Document-Ranking), which is integrated into Anserini's regression testing framework.
Note that there are four different bag-of-words regression conditions for this task, and this page describes the following:
+ **Indexing Condition:** each MS MARCO document is first segmented into passages, and each passage is treated as a unit of indexing
+ **Expansion Condition:** none
All four conditions are described in detail [here](https://github.com/castorini/docTTTTTquery), in the context of doc2query-T5.
In the passage (i.e., segment) indexing condition, we select the score of the highest-scoring passage from a document as the score for that document to produce a document ranking; this is known as the MaxP technique.
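Concretely, MaxP is just a max over passage scores grouped by document. Below is a minimal sketch of this aggregation over a passage-level run, assuming segment ids of the form `docid#segment` (the same `#` delimiter passed to `-selectMaxPassage.delimiter` in the commands further down); the function and the sample ids are illustrative, not part of Anserini:

```python
from collections import defaultdict

def maxp(passage_hits, k=1000):
    """Collapse ranked (segment_id, score) passage hits into a document
    ranking, scoring each document by its highest-scoring passage."""
    best = defaultdict(lambda: float("-inf"))
    for segment_id, score in passage_hits:
        docid = segment_id.split("#")[0]  # strip the segment suffix
        best[docid] = max(best[docid], score)
    # Rank documents by their best passage score and keep the top k.
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Three passages drawn from two (made-up) documents:
hits = [("D301595#0", 12.3), ("D301595#2", 14.1), ("D108722#1", 13.0)]
print(maxp(hits, k=2))  # [('D301595', 14.1), ('D108722', 13.0)]
```

In Anserini itself, this aggregation is performed by the `-selectMaxPassage` options shown in the retrieval commands below.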
The exact configurations for these regressions are stored in [this YAML file](${yaml}).
Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
Note that in November 2021 we discovered issues in our regression tests, documented [here](${root_path}/docs/experiments-msmarco-doc-doc2query-details.md).
As a result, we have had to rebuild all our regressions from the raw corpus.
These new versions yield end-to-end scores that are slightly different, so if numbers reported in a paper do not exactly match the numbers here, this may be the reason.
From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
```
python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
```
## Indexing
Typical indexing command:
```
${index_cmds}
```
The directory `/path/to/msmarco-doc-segmented/` should contain the segmented corpus in Anserini's jsonl format.
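For reference, each line of that corpus is a JSON object with an `id` and a `contents` field, where the id encodes the source document and its segment number; an illustrative (made-up) line:

```
{"id": "D301595#2", "contents": "text of one segment of document D301595 ..."}
```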
See [this page](${root_path}/docs/experiments-msmarco-doc-doc2query-details.md) for how to prepare the corpus.
For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
## Retrieval
Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
The regression experiments here evaluate on the 5193 dev set queries.
After indexing has completed, you should be able to perform retrieval as follows:
```
${ranking_cmds}
```
Evaluation can be performed using `trec_eval`:
```
${eval_cmds}
```
## Effectiveness
With the above commands, you should be able to reproduce the following results:
${effectiveness}
Explanation of settings:
+ The setting "default" refers to the default BM25 settings of `k1=0.9`, `b=0.4`.
+ The setting "tuned" refers to `k1=2.16`, `b=0.61`, tuned in 2020/12 using the MS MARCO document sparse judgments to optimize for recall@100 (i.e., for first-stage retrieval).
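For reference, these two parameters plug into the classic BM25 scoring function, where `k1` controls term-frequency saturation and `b` controls document-length normalization (Lucene's built-in BM25 differs slightly in its IDF and length-normalization details):

$$
\mathrm{score}(D, Q) = \sum_{t \in Q} \mathrm{IDF}(t) \cdot \frac{\mathrm{tf}(t, D) \cdot (k_1 + 1)}{\mathrm{tf}(t, D) + k_1 \cdot \left(1 - b + b \cdot \frac{|D|}{\mathrm{avgdl}}\right)}
$$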
In these runs, we are retrieving the top 1000 hits for each query and using `trec_eval` to evaluate all 1000 hits.
Since we're in the passage indexing condition, we fetch the top 10000 passages and then select the top 1000 documents using MaxP.
This lets us measure R@100 and R@1000; the latter is particularly important when these runs are used as first-stage retrieval.
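As an illustration, recall can be computed with the copy of `trec_eval` bundled with Anserini; the exact commands are generated into the evaluation block above, and the run filename here is a stand-in for a run in standard TREC format:

```bash
tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100,1000 \
  tools/topics-and-qrels/qrels.msmarco-doc.dev.txt \
  runs/run.msmarco-doc-segmented.bm25-default.trec
```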
Beware: an official MS MARCO document ranking leaderboard submission comprises only 100 hits per query.
See [this page](${root_path}/docs/experiments-msmarco-doc-leaderboard.md) for details on Anserini baseline runs that were submitted to the official leaderboard.
The MaxP passage retrieval functionality is available in `SearchCollection`.
To generate an MS MARCO submission with the BM25 default parameters, corresponding to "BM25 (default)" above:
```bash
$ target/appassembler/bin/SearchCollection -topicreader TsvString \
-topics tools/topics-and-qrels/topics.msmarco-doc.dev.txt \
-index indexes/lucene-index.msmarco-doc-segmented/ \
-output runs/run.msmarco-doc-segmented.bm25-default.txt -format msmarco \
-bm25 -bm25.k1 0.9 -bm25.b 0.4 -hits 1000 \
-selectMaxPassage -selectMaxPassage.delimiter "#" -selectMaxPassage.hits 100
$ python tools/scripts/msmarco/msmarco_doc_eval.py \
--judgments tools/topics-and-qrels/qrels.msmarco-doc.dev.txt \
--run runs/run.msmarco-doc-segmented.bm25-default.txt
#####################
MRR @100: 0.2682349308946578
QueriesRanked: 5193
#####################
```
This run was _not_ submitted to the MS MARCO document ranking leaderboard, but is reported in the Lin et al. (SIGIR 2021) Pyserini paper.
Note that the above command uses `-format msmarco` to directly generate a run in the MS MARCO output format.
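In the MS MARCO output format, each line of the run file holds a query id, a document id, and a rank, tab-separated; an illustrative (made-up) line:

```
1048585	D301595	1
```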
To generate an MS MARCO submission with the BM25 tuned parameters, corresponding to "BM25 (tuned)" above:
```bash
$ target/appassembler/bin/SearchCollection -topicreader TsvString \
-topics tools/topics-and-qrels/topics.msmarco-doc.dev.txt \
-index indexes/lucene-index.msmarco-doc-segmented/ \
-output runs/run.msmarco-doc-segmented.bm25-tuned.txt -format msmarco \
-bm25 -bm25.k1 2.16 -bm25.b 0.61 -hits 1000 \
-selectMaxPassage -selectMaxPassage.delimiter "#" -selectMaxPassage.hits 100
$ python tools/scripts/msmarco/msmarco_doc_eval.py \
--judgments tools/topics-and-qrels/qrels.msmarco-doc.dev.txt \
--run runs/run.msmarco-doc-segmented.bm25-tuned.txt
#####################
MRR @100: 0.2751202109946902
QueriesRanked: 5193
#####################
```
This run corresponds to the MS MARCO document ranking leaderboard entry "Anserini's BM25 (per passage), parameters tuned for recall@100 (k1=2.16, b=0.61)" dated 2021/01/20, and is reported in the Lin et al. (SIGIR 2021) Pyserini paper.
Again, note that the above command uses `-format msmarco` to directly generate a run in the MS MARCO output format.
As of February 2022, following resolution of [#1721](https://github.com/castorini/anserini/issues/1721), BM25 runs for the MS MARCO leaderboard can be generated with the same commands as above.
However, the effectiveness has changed slightly, since we corrected underlying issues with data preparation.
For default parameters (`k1=0.9`, `b=0.4`):
```
$ python tools/scripts/msmarco/msmarco_doc_eval.py \
--judgments tools/topics-and-qrels/qrels.msmarco-doc.dev.txt \
--run runs/run.msmarco-doc-segmented.bm25-default.txt
#####################
MRR @100: 0.26851990908986706
QueriesRanked: 5193
#####################
```
For tuned parameters (`k1=2.16`, `b=0.61`):
```
$ python tools/scripts/msmarco/msmarco_doc_eval.py \
--judgments tools/topics-and-qrels/qrels.msmarco-doc.dev.txt \
--run runs/run.msmarco-doc-segmented.bm25-tuned.txt
#####################
MRR @100: 0.27551963417683756
QueriesRanked: 5193
#####################
```