WikiChat is an improved RAG system. It mitigates hallucination in large language models by grounding responses in data retrieved from a corpus (a minimal sketch of this retrieve-then-answer idea follows the list below).
Loki: an open-source solution for automating factuality verification
Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models".
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"
RefChecker provides an automatic checking pipeline and a benchmark dataset for detecting fine-grained hallucinations generated by large language models.
A package to evaluate the factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation" (a toy sketch of this precision-scoring idea follows the list below)
[Data + code] ExpertQA : Expert-Curated Questions and Attributed Answers
Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps"
Code for the arXiv paper: "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond"
Implementation of the paper "FactGraph: Evaluating Factuality in Summarization with Semantic Graph Representations (NAACL 2022)"
OLAPH: Improving Factuality in Biomedical Long-form Question Answering
Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation
The implementation for the EMNLP 2023 paper "Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators"
Code and data for the ACL 2024 Findings paper "Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning"
SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433)
Source code of our EMNLP 2024 paper "FactAlign: Long-form Factuality Alignment of Large Language Models"
Code and data for the Dreyer et al (2023) paper on abstractiveness and factuality in abstractive summarization
Dataset: Fighting the COVID-19 Infodemic: Modeling the Perspective of Journalists, Fact-Checkers, Social Media Platforms, Policy Makers, and the Society
Code for paper "Factual Confidence of LLMs: on Reliability and Robustness of Current Estimators"
Event factuality prediction with a trigger state LSTM
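
Several of the repositories above, WikiChat among them, rely on the same basic retrieve-then-answer pattern: fetch passages from a trusted corpus and instruct the model to answer only from them. The sketch below illustrates that pattern in minimal form; the toy corpus, the word-overlap `retrieve` function, and the `grounded_prompt` helper are illustrative assumptions, not WikiChat's actual code or API.

```python
# Minimal retrieve-then-answer sketch of the grounding idea behind
# retrieval-augmented generation (RAG). The corpus, scoring function,
# and prompt format are illustrative assumptions, not WikiChat's API.

from collections import Counter

CORPUS = [
    "Mount Everest is 8,849 metres tall and lies on the Nepal-China border.",
    "The Amazon River discharges more water than any other river on Earth.",
    "Python was first released by Guido van Rossum in 1991.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by simple word overlap with the query (a stand-in
    for a real dense or sparse retriever)."""
    q_tokens = Counter(query.lower().split())
    scored = [
        (sum((q_tokens & Counter(p.lower().split())).values()), p)
        for p in corpus
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for _, p in scored[:k]]

def grounded_prompt(question: str) -> str:
    """Build a prompt that instructs the LLM to answer only from the
    retrieved evidence rather than from its parametric memory."""
    evidence = "\n".join(retrieve(question, CORPUS))
    return (
        "Answer the question using ONLY the evidence below. "
        "If the evidence is insufficient, say so.\n\n"
        f"Evidence:\n{evidence}\n\nQuestion: {question}\nAnswer:"
    )

print(grounded_prompt("How tall is Mount Everest?"))
```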
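
FActScore-style evaluation, listed above, scores a long-form generation by splitting it into atomic facts, verifying each against a knowledge source, and reporting the fraction that is supported. The toy sketch below shows only the shape of that computation; the sentence-level `atomic_claims` splitter and the exact-match `is_supported` check are deliberately naive placeholders for the paper's LLM-based decomposition and retrieval-backed verification.

```python
# Toy illustration of fine-grained factual-precision scoring in the
# spirit of FActScore: split a generation into atomic claims, verify
# each against a knowledge source, report the supported fraction.

KNOWLEDGE = {
    "marie curie won two nobel prizes",
    "marie curie was born in warsaw",
}

def atomic_claims(text: str) -> list[str]:
    """Stand-in for an LLM-based claim decomposer: one claim per sentence."""
    return [s.strip() for s in text.split(".") if s.strip()]

def is_supported(claim: str) -> bool:
    """Stand-in for retrieval + entailment: exact lookup in a fact set."""
    return claim.lower() in KNOWLEDGE

def factual_precision(generation: str) -> float:
    """Fraction of atomic claims supported by the knowledge source."""
    claims = atomic_claims(generation)
    if not claims:
        return 0.0
    return sum(is_supported(c) for c in claims) / len(claims)

text = "Marie Curie won two Nobel Prizes. Marie Curie was born in Paris."
print(f"factual precision = {factual_precision(text):.2f}")  # 0.50
```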