---
layout: home
---
MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a WSDM 2023 Cup challenge that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world. These languages have diverse typologies, originate from many different language families, and are associated with varying amounts of available resources, spanning what we typically characterize as high-resource as well as low-resource languages. The focus of this challenge is monolingual retrieval, where the queries and the corpus are in the same language (e.g., Swahili queries searching for Swahili documents). Our goal is to spur research that will improve retrieval models across a broad continuum of languages, and thus improve information access capabilities for diverse populations around the world, particularly those that have been traditionally underserved.
With the advent and dominance of deep learning and approaches based on neural networks (particularly transformer-based models) in information retrieval and beyond, the importance of large datasets as drivers of progress is well understood. For retrieval models in English, the MS MARCO datasets have had a transformative impact in advancing the field. To stimulate similar advances in multilingual retrieval, we have built the MIRACL 🌍🙌🌏 dataset, comprising human-annotated passage-level relevance judgments on Wikipedia for 18 languages, totaling over 600k training pairs. Along with the dataset, WSDM 2023 Cup provides a common evaluation methodology, a venue for a competition-style event with prizes, and a leaderboard. To get participants off the ground quickly, our team will provide easy-to-reproduce baselines. There will be two tracks in this challenge: "known languages" and "surprise languages". In the first, we will provide data well in advance of the submission deadline. In the second, the identity of the languages (along with data) will only be made available at the last moment. The "surprise languages" track emphasizes the rapid development of language-specific capabilities.
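To give a flavor of what such a baseline might look like, here is a minimal BM25 retrieval sketch using Pyserini. This is illustrative rather than the official baseline: the prebuilt index name (`miracl-v1.0-sw`) and the sample query are assumptions on our part, so substitute the index published with the official baselines or one you build locally.

```python
# Minimal BM25 retrieval sketch with Pyserini (illustrative, not the
# official baseline). Assumption: a prebuilt index named "miracl-v1.0-sw"
# is available; swap in the actual index name or a locally built index.
from pyserini.search.lucene import LuceneSearcher

searcher = LuceneSearcher.from_prebuilt_index('miracl-v1.0-sw')
searcher.set_language('sw')  # use the Swahili analyzer for query tokenization

hits = searcher.search('Je, wanyama wa porini hupatikana wapi?', k=10)
for rank, hit in enumerate(hits, start=1):
    print(f'{rank:2d} {hit.docid:25s} {hit.score:.4f}')
```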
- September 20, 2022: Initial announcement.
- October 19, 2022: Release training and development sets for the known languages.
- January 5, 2023: Release surprise languages.
- January 5, 2023: Release test-b set for all languages.
The topics and judgments in the training and development sets are now released, as well as the corpora. Check out our GitHub repository and paper for more details!
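For a quick look at the data in Python, a sketch along these lines should load the released topics and judgments from the Hugging Face Hub. The Hub ID, config name, and field names shown here are assumptions based on the release at the time of writing; the GitHub repository remains the authoritative reference, and the dataset may require authentication if gated.

```python
# Illustrative loading sketch; the Hub ID, config, and field names are
# assumptions -- see the GitHub repository for authoritative instructions.
from datasets import load_dataset

# Assumed: one config per language code, e.g. 'sw' for Swahili.
miracl_sw = load_dataset('miracl/miracl', 'sw', split='train')

example = miracl_sw[0]
print(example['query_id'], example['query'])
for passage in example['positive_passages'][:2]:
    print(passage['docid'], passage['title'], passage['text'][:80])
```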
The following table provides the number of topics (= queries) and relevance judgments (= relevance labels) for each (language, split) combination, as well as the number of passages and Wikipedia articles in each corpus.
{% assign st = site.data.stats %}

| Lang | # Q (Train) | # J (Train) | # Q (Dev) | # J (Dev) | # Q (Test-A) | # J (Test-A) | # Q (Test-B) | # J (Test-B) | # Passages | # Articles |
|---|---|---|---|---|---|---|---|---|---|---|
{% for entry in st %}{% assign key = entry | first %}{% unless st[key].header1 or st[key].header2 %}| {{ st[key].lang }} | {{ st[key].q_train }} | {{ st[key].j_train }} | {{ st[key].q_dev }} | {{ st[key].j_dev }} | {{ st[key].q_test_a }} | {{ st[key].j_test_a }} | {{ st[key].q_test_b }} | {{ st[key].j_test_b }} | {{ st[key].n_passage }} | {{ st[key].n_article }} |
{% endunless %}{% endfor %}
Descriptive statistics for MIRACL 🌍🙌🌏. Lang denotes the language and its ISO 639-1 code; # Q denotes the number of queries; # J denotes the total number of relevance judgments (including both positive and negative judgments); # Passages denotes the number of passages in each language; and # Articles denotes the number of Wikipedia articles in the same language.
Our challenge follows a standard retrieval setup: test queries will be released (at different points in time for the two tasks), and participants will submit top-k results for each of the queries. These results will be primarily evaluated in terms of effectiveness (i.e., relevance of the responses). We will build a leaderboard that tracks the effectiveness of submissions. More details to follow!
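Concretely, "top-k results" in setups like this are usually submitted as a plain-text run file, one ranked document per line, and scored against the relevance judgments. The sketch below writes such a file and evaluates it with pytrec_eval; the TREC run layout and the nDCG@10 metric are our assumptions for illustration, since the official submission format and metric will be announced along with the leaderboard.

```python
# Illustrative sketch: write a TREC-style run file and score it with
# pytrec_eval. The run layout and nDCG@10 are assumptions for illustration;
# the official format and metric will be announced with the leaderboard.
import pytrec_eval

run = {'q1': {'doc3': 12.5, 'doc7': 11.2, 'doc9': 9.8}}  # qid -> docid -> score
qrels = {'q1': {'doc3': 1, 'doc5': 0, 'doc7': 0}}        # qid -> docid -> label

# TREC run format: qid Q0 docid rank score run_tag
with open('run.sw.txt', 'w') as f:
    for qid, docs in run.items():
        ranked = sorted(docs.items(), key=lambda kv: kv[1], reverse=True)
        for rank, (docid, score) in enumerate(ranked, start=1):
            f.write(f'{qid} Q0 {docid} {rank} {score} my-run\n')

evaluator = pytrec_eval.RelevanceEvaluator(qrels, {'ndcg_cut.10'})
print(evaluator.evaluate(run))  # e.g. {'q1': {'ndcg_cut_10': ...}}
```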
{% assign st = site.data.schedule %}

| Date | Event |
|---|---|
{% for entry in st %}{% assign key = entry | first %}{% if st[key].bold %}| **{{ st[key].date }}** | **{{ st[key].event }}** |
{% elsif st[key].old_date %}| ~~{{ st[key].old_date }}~~ {{ st[key].date }} | {{ st[key].event }} |
{% else %}| {{ st[key].date }} | {{ st[key].event }} |
{% endif %}{% endfor %}
{% include organizers.html %}