Official implementation of LoT paper: "Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic"
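As a hedged illustration of the zero-shot chain-of-thought prompting that LoT builds on (the paper's own logic-based verification step lives in this repo and is not sketched here), a minimal Python example; call_llm is a hypothetical placeholder for any chat-completion client:

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: send `prompt` to an LLM and return its reply."""
    raise NotImplementedError("wire up your preferred LLM client here")

def zero_shot_cot(question: str) -> str:
    # The classic zero-shot CoT trigger phrase (Kojima et al., 2022).
    reasoning = call_llm(f"Q: {question}\nA: Let's think step by step.")
    # A second pass extracts a final answer from the generated reasoning.
    return call_llm(
        f"Q: {question}\nReasoning: {reasoning}\n"
        "Therefore, the final answer is:"
    )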
The course provides guidance on best practices for prompting and for building applications with Llama 2, Meta's family of openly licensed models that permit commercial use.
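For reference, a sketch of the single-turn [INST]/<<SYS>> template that Llama 2 chat models expect; the helper name format_llama2_prompt is just illustrative:

def format_llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Build a single-turn prompt in the Llama 2 chat format."""
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = format_llama2_prompt(
    "You are a helpful, honest assistant.",
    "Summarize chain-of-thought prompting in one sentence.",
)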
ThinkBench is an LLM benchmarking tool focused on evaluating the effectiveness of chain-of-thought (CoT) prompting for answering multiple-choice questions.
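Not ThinkBench's actual interface, but a hedged sketch of the comparison such a benchmark runs: the same multiple-choice item answered directly versus with a CoT trigger, scored against the gold label (call_llm and extract_choice are hypothetical helpers):

import re

def extract_choice(reply: str) -> str | None:
    """Pull the first standalone option letter A-D from a model reply."""
    match = re.search(r"\b([A-D])\b", reply)
    return match.group(1) if match else None

def accuracy(items: list[dict], use_cot: bool) -> float:
    """Fraction answered correctly; items look like
    {"prompt": "...question and options...", "gold": "B"}."""
    correct = 0
    for item in items:
        prompt = item["prompt"]
        if use_cot:
            prompt += "\nLet's think step by step, then state the option letter."
        else:
            prompt += "\nAnswer with the option letter only."
        if extract_choice(call_llm(prompt)) == item["gold"]:  # call_llm as above
            correct += 1
    return correct / len(items)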
This repository contains results from my MSc thesis, "Test Case Generation from User Stories using Generative AI Techniques with LLM Models." Each folder includes generated test cases as PDFs, detailed metric scores as Excel sheets, and visual graphs in the images folder, giving a comprehensive view of the experiments and their outcomes.
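The thesis prompts themselves are not reproduced here; as a hedged sketch, one common way to turn a user story into structured test cases with an LLM (call_llm is again a hypothetical client):

def generate_test_cases(user_story: str, n: int = 3) -> str:
    """Hedged sketch: ask an LLM to derive structured test cases."""
    prompt = (
        f"User story:\n{user_story}\n\n"
        f"Derive {n} test cases. For each, give an ID, title, preconditions, "
        "steps, and the expected result."
    )
    return call_llm(prompt)  # hypothetical LLM client, as in earlier sketches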
This research investigates how synthetic questions affect the performance of French statutory article retrieval, a task that aims to automatically retrieve the law articles relevant to a legal question.
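As a hedged sketch of the retrieval side of that task (not the proposed synthetic-question method), lexical retrieval of statute articles with BM25 via the rank_bm25 package; the two article snippets are illustrative paraphrases, and a synthetic question could be swapped in for query to probe its effect:

from rank_bm25 import BM25Okapi  # pip install rank-bm25

# Illustrative paraphrases of French Civil Code articles, not official text.
articles = [
    "Article 1101: A contract is an agreement between two or more persons.",
    "Article 1240: Any act that causes damage to another obliges its author to repair it.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in articles])

query = "who must repair damage caused to another person"
top = bm25.get_top_n(query.lower().split(), articles, n=1)
print(top[0])  # expected: the Article 1240 snippet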
Add a description, image, and links to the chain-of-thought-prompting topic page so that developers can more easily learn about it.
To associate your repository with the chain-of-thought-prompting topic, visit your repo's landing page and select "manage topics."