Dataset Information:

It would be awesome to have the document corpus (and its segmented counterpart) used in TREC RAG 2024 integrated into ir_datasets. From the description on the web page, adding it should be no problem, and random access to documents should also be very efficient, since the file and byte offset are already encoded in the document identifiers.

The only question I have: as the document identifiers contain the offset at which a document starts in a file (but not where it ends), is there perhaps already functionality that seeks to that offset and reads the JSON entry up to the closing bracket? If not, I could add this as well, with unit tests; that should be no problem.
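For illustration, a minimal sketch of such an offset-based lookup (assumptions: decompressed bundle files named like msmarco_v2.1_doc_00.json in a local corpus_dir, following the MS MARCO v2 loading conventions; the actual ir_datasets integration may do this differently). Because each document is stored as a single JSON object per line, seeking to the encoded offset and reading one line already yields the complete entry:

import json
import os

def get_document(doc_id, corpus_dir="msmarco_v2.1_doc"):
    # An identifier like "msmarco_v2.1_doc_00_1234" encodes the bundle
    # ("00") and the byte offset (1234) at which the document's JSON
    # line starts in the decompressed bundle file (naming is an assumption).
    *prefix, bundle, offset = doc_id.split("_")
    bundle_file = os.path.join(corpus_dir, "_".join(prefix) + f"_{bundle}.json")
    with open(bundle_file, "rt", encoding="utf8") as f:
        f.seek(int(offset))
        # One JSON object per line: readline() returns the full entry,
        # so there is no need to scan for the closing bracket manually.
        document = json.loads(f.readline())
    assert document["docid"] == doc_id
    return document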
Links to Resources:

Dataset ID(s) & supported entities:

- msmarco-document-v2.1: for the original documents
- msmarco-document-v2.1/segmented: for the segmented documents

Checklist

Mark each task once completed. All should be checked prior to merging a new dataset.

- Dataset definition (in ir_datasets/datasets/[topid].py)
- Tests (in tests/integration/[topid].py)
- Metadata generated (via the ir_datasets generate_metadata command; should appear in ir_datasets/etc/metadata.json)
- Documentation (in ir_datasets/etc/[topid].yaml)
- Downloadable content (in ir_datasets/etc/downloads.json)
- Download verification action (in .github/workflows/verify_downloads.yml). Only one needed per topid.
- Any small public files from NIST (or other potentially troublesome files) mirrored in https://github.com/seanmacavaney/irds-mirror/. Mirrored status properly reflected in downloads.json.

Additional comments/concerns/ideas/etc.
Dear all, I started a draft pull request (only to indicate that there is some progress): #269

Mainly documentation TODOs are pending, but as the deadline is close, this might already be useful for others even though the documentation is not yet finalized.

In particular, iterating over the documents already works (e.g., as covered in the unit tests):
import ir_datasets

for doc in ir_datasets.load('msmarco-document-v2.1/segmented').docs_iter():
    print(doc)
    break
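If the integration follows the usual ir_datasets conventions, the offset-based random access discussed above would be exposed via docs_store(); a small sketch under that assumption (the lookup key is whatever doc_id the iterator yields, not a hand-written identifier):

import ir_datasets

dataset = ir_datasets.load('msmarco-document-v2.1')
store = dataset.docs_store()

# Take one real identifier from the iterator, then fetch the same
# document again by id via the (offset-based) docs store.
first = next(iter(dataset.docs_iter()))
doc = store.get(first.doc_id)
assert doc.doc_id == first.doc_id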