Knights and knaves problems represent a classic genre of logical puzzles in which characters either always tell the truth or always lie. The objective is to deduce each character's identity from their statements. The challenge arises from the characters' unknown truth-telling or lying behavior, which determines the logical implications of each statement. Solving these puzzles therefore requires not only direct deductions from individual statements, but also the ability to assess the truthfulness of statements by reasoning through various hypothetical scenarios. As such, knights and knaves puzzles serve as compelling examples of suppositional reasoning. In this paper, we introduce \emph{TruthQuest}, a benchmark for suppositional reasoning based on the principles of knights and knaves puzzles. Our benchmark presents problems of varying complexity, varying both the number of characters and the types of logical statements involved. Evaluations on \emph{TruthQuest} show that large language models like Llama 3 and Mixtral-8x7B exhibit significant difficulties in solving these tasks. A detailed error analysis of the models' output reveals that lower-performing models make a diverse range of reasoning errors, frequently failing to grasp the concept of truth and lies. In comparison, more proficient models primarily struggle with accurately inferring the logical implications of potentially false statements.
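To give a flavor of the task, here is a minimal, self-contained sketch of how such a puzzle can be solved by testing every hypothesis. The example puzzle and code are purely illustrative and are not part of the benchmark or its generation code:

```python
from itertools import product

# Toy puzzle (not taken from TruthQuest):
#   A says: "B is a knave."
#   B says: "A and I are of the same kind."
# A character is a knight iff their statement is true, a knave iff it is false.
statements = {
    "A": lambda a, b: not b,    # "B is a knave"
    "B": lambda a, b: a == b,   # "A and I are of the same kind"
}

# Suppositional reasoning by brute force: hypothesize every knight/knave
# assignment (True = knight, False = knave) and keep the consistent ones.
solutions = [
    {"A": a, "B": b}
    for a, b in product([True, False], repeat=2)
    if statements["A"](a, b) == a and statements["B"](a, b) == b
]

print(solutions)  # [{'A': True, 'B': False}] -> A is a knight, B is a knave
```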
- Setup
- Generate Puzzles
- Run Models
- Evaluate Performance
- LLM-Based and Human Annotations
- Human-Annotated CoT Prompts
- License
- Citation
## Setup

All code was developed and tested on Ubuntu 22.04 with Python 3.11.
To run the code, we recommend using Poetry:
poetry install # Install dependencies
poetry shell # Activate virtual environment
# Work for a while
deactivate # Exit the virtual environment
Please make sure to configure your HuggingFace credentials in order to download the respective models.
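One way to do this is to log in programmatically with the `huggingface_hub` library; the `HF_TOKEN` environment variable below is an assumption (any valid access token works), and running `huggingface-cli login` once is an equivalent alternative:

```python
import os

from huggingface_hub import login

# Authenticate with an access token so that gated models (e.g., the Llama
# checkpoints listed below) can be downloaded. HF_TOKEN is assumed to be set.
login(token=os.environ["HF_TOKEN"])
```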
## Generate Puzzles

To generate the puzzles, run the following command:
python gen_data.py --from-yaml
This will generate the corresponding puzzle data. Alternatively, you can fetch the pre-generated data from here.
## Run Models

To run a model on the generated puzzles, use the following command:
python run.py --model <hf-model-name>
In this project, we used the following models:
- meta-llama/Llama-2-7b-chat-hf
- meta-llama/Llama-2-13b-chat-hf
- meta-llama/Llama-2-70b-chat-hf
- meta-llama/Meta-Llama-3-8B-Instruct
- meta-llama/Meta-Llama-3-70B-Instruct
- mistralai/Mixtral-8x7B-Instruct-v0.1
Note that in order to use a new model, you need to add a corresponding configuration file to this folder.
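As a rough, generic illustration of how the chat models listed above can be prompted with the `transformers` library (this is only a sketch using standard HuggingFace APIs, not the logic implemented in `run.py`):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"  # any model from the list above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# A single chat turn containing a (made-up) knights-and-knaves puzzle.
messages = [{
    "role": "user",
    "content": "A says: 'B is a knave.' B says: 'A and I are of the same kind.' "
               "Who is a knight and who is a knave?",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```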
## Evaluate Performance

To evaluate the performance of the models, run the following command:
python evaluate_conclusion.py
For specific command-line arguments, please refer to the code.
## LLM-Based and Human Annotations

We publish all LLM-based and human annotations in our respective HuggingFace data repository. The TruthQuest dataset can be found here.
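Alternatively, the data can be pulled directly from the HuggingFace Hub with the `datasets` library; the repository ID below is only a placeholder, so substitute the one linked above:

```python
from datasets import load_dataset

# Placeholder repository ID -- replace it with the TruthQuest repo linked above.
dataset = load_dataset("<hf-org-or-user>/TruthQuest")
print(dataset)
```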
## Human-Annotated CoT Prompts

We provide up to 8 human-annotated chain-of-thought (CoT) examples for each dataset configuration. Please see this folder for further information.
## License

This work is licensed under a CC BY-SA 4.0 license.
## Citation

If you find our work helpful, please cite our paper as follows:
@article{mondorf2024liar,
  title={Liar, Liar, Logical Mire: A Benchmark for Suppositional Reasoning in Large Language Models},
  author={Mondorf, Philipp and Plank, Barbara},
  journal={arXiv preprint arXiv:2406.12546},
  year={2024}
}