MLissard

Paper repository: MLissard: Multilingual Long and Simple Sequential Reasoning Datasets

Language models are now capable of solving tasks that require dealing with long sequences consisting of hundreds of thousands of tokens. However, they often fail on tasks that require repetitive use of simple rules, even on sequences that are much shorter than those seen during training. For example, state-of-the-art LLMs can find common items in two lists with up to 20 items but fail when the lists have 80 items. In this paper, we introduce MLissard, a multilingual benchmark designed to evaluate models' abilities to process and generate texts of varied lengths; it also offers a mechanism for controlling sequence complexity.

Our evaluation of open-source and proprietary models shows a consistent decline in performance across all models and languages as the complexity of the sequence increases. Surprisingly, the use of in-context examples in languages other than English helps increase extrapolation performance significantly.

Datasets

In the data/(task)/ folder you'll find the datasets for evaluation with MLissard. The files are in .json format and contain a brief description of the task followed by in-context examples in the target language. The "test_examples" field contains the test examples, each made up of the input, the target, the length (len), and the bin that the length belongs to.
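As an illustration, the snippet below is a minimal sketch of how such a file could be loaded and inspected; the file path is a placeholder, and the field names follow the description above (input, target, len, bin), so adjust them if the actual files differ.

```python
import json
from collections import Counter

# Placeholder path; the actual file names under data/(task)/ may differ.
path = "data/my_task/en.json"

with open(path, encoding="utf-8") as f:
    dataset = json.load(f)

# Each test example carries an input, a target, its length and a length bin.
examples = dataset["test_examples"]
print(f"{len(examples)} test examples")

# Group examples by length bin to see how sequence complexity is distributed.
per_bin = Counter(ex["bin"] for ex in examples)
for length_bin, count in sorted(per_bin.items()):
    print(f"bin {length_bin}: {count} examples")
```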

Results

In results/(task)/GPT-4/ you can find the answers generated by the GPT-4 and Llama-3 models, as well as the ablation tests; the folders are separated by task.

Scripts

The src/(task)/ folder contains .py files for generating new examples or for expanding MLissard. To run one of them:

python <task_name>.py --output_path=my_output_path
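For orientation, the sketch below shows what a generator script with this interface might look like. It is a hypothetical example, not the actual MLissard code: only the --output_path flag mirrors the command above, and the task logic (a toy list-intersection generator) and the binning scheme are placeholders.

```python
import argparse
import json
import random

def make_example(n_items: int) -> dict:
    # Toy task: find the common items in two lists (placeholder logic).
    pool = list(range(n_items * 3))
    a = random.sample(pool, n_items)
    b = random.sample(pool, n_items)
    common = sorted(set(a) & set(b))
    return {
        "input": f"List A: {a}\nList B: {b}",
        "target": str(common),
        "len": n_items,
        "bin": n_items // 10,  # placeholder binning scheme
    }

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--output_path", required=True)
    args = parser.parse_args()

    # Generate examples of increasing length so performance can be
    # tracked as sequence complexity grows.
    examples = [make_example(n) for n in range(10, 101, 10)]
    with open(args.output_path, "w", encoding="utf-8") as f:
        json.dump({"test_examples": examples}, f, ensure_ascii=False, indent=2)

if __name__ == "__main__":
    main()
```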